widely deployed as part of video game engines [6]. Recordings
of gameplay are artifacts shared between users for entertain-
ment and education. These recordings are also a critical tool
for debugging video game engines and their network proto-
cols [30]. In the wider software development community, bug
reporting systems [8] and practices [36] emphasize the sharing
of evidence such as program output (e.g., screenshots, stack
traces, logs, memory dumps) and program input (e.g., test
cases, configurations, and files). Developers investigate bug
reports by following user-written reproduction steps.
While we have focused on the utility of record/replay systems
for debugging, such systems are also useful for creating and
evaluating software. Prior work has used record/replay of
real captured data to provide a consistent, interactive means
for prototyping sensor processing [3, 19] and computer vi-
sion [10] algorithms. More generally, macro-replay systems
for reproducing user [31] and network [29] input are used
for prototyping and testing web applications and other user
interfaces. Dolos recordings contain a superset of these
inputs; it is possible to synthesize a macro (i.e., an automated
test case) for use with other tools, as sketched below. The
JSBench tool [25] uses this
strategy to synthesize standalone web benchmarks. Derived
inputs may improve the results of state-exploration tools such
as Crawljax [17] by providing real, captured input traces.
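To make the macro-synthesis idea concrete, the sketch below lowers a
captured input trace into a standalone script, one replayable action
per recorded input. It is a minimal illustration only: the
RecordedInput shape and emitMacro function are hypothetical names, not
part of the Dolos API, and the emitted script targets a Playwright-style
page object purely for concreteness; any macro runner would do.

// Hypothetical shape of one captured user input; Dolos's actual
// trace format is not shown in this excerpt.
interface RecordedInput {
  kind: "navigate" | "click" | "keypress";
  url?: string;       // destination, for navigations
  selector?: string;  // CSS selector of the event target
  key?: string;       // key pressed, for keypress events
}

// Lower a captured trace into a standalone macro: one line of
// replayable script per input, preserving the recorded order.
function emitMacro(trace: RecordedInput[]): string {
  const lines: string[] = [];
  for (const input of trace) {
    switch (input.kind) {
      case "navigate":
        lines.push(`await page.goto(${JSON.stringify(input.url)});`);
        break;
      case "click":
        lines.push(`await page.click(${JSON.stringify(input.selector)});`);
        break;
      case "keypress":
        lines.push(`await page.press(${JSON.stringify(input.selector)}, ` +
                   `${JSON.stringify(input.key)});`);
        break;
    }
  }
  return lines.join("\n");
}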
CONCLUSION AND FUTURE WORK
Together, Timelapse and Dolos constitute the first toolchain
designed for interactively capturing and replaying web ap-
plication behaviors during debugging. Timelapse focuses on
browsing, visualizing, and navigating program states to sup-
port behavior reproduction during debugging tasks. Our user
study confirmed that behavior reproduction was a significant
activity in realistic debugging tasks, and Timelapse assisted
some developers in locating and automatically reproducing
behaviors of interest. The Dolos infrastructure uses a novel
adaptation of instruction-counting record/replay techniques to
reproduce web application behaviors. Our prototype demon-
strates that deterministic record/replay can be implemented
within browsers in an additive way—without impacting per-
formance or determinism, impeding tool use, or requiring
configuration—and is a platform for new debugging aids.
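To illustrate the general instruction-counting technique (not Dolos's
implementation, whose details are beyond this excerpt): each
nondeterministic input is logged together with the value of a monotonic
execution counter at delivery, and replay re-delivers each input only
when execution reaches the same count. The names below are illustrative.

// A logged nondeterministic input, stamped with the execution
// count at which it was originally delivered.
interface LoggedEvent<T> {
  count: number;  // execution-counter value at original delivery
  payload: T;     // the nondeterministic input itself
}

class Replayer<T> {
  private next = 0;  // index of the next logged event to re-deliver

  constructor(
    private log: LoggedEvent<T>[],          // events in count order
    private deliver: (payload: T) => void,  // injects an input
  ) {}

  // Called as the replayed execution advances its counter. When the
  // counter reaches the point where an input originally arrived,
  // re-deliver it, so the replayed run observes identical inputs at
  // identical execution points.
  onCounter(count: number): void {
    while (this.next < this.log.length &&
           this.log[this.next].count <= count) {
      this.deliver(this.log[this.next].payload);
      this.next++;
    }
  }
}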
Prior work assumes that executions are in short supply during
debugging, and that developers know a priori what sorts of
analysis and data they want before reproducing behavior. In
future work, we want to disrupt this status quo. On-demand
replay (in the foreground, background, or offline) could make
practical the program understanding tools [12] and dynamic
analyses [4] that have heretofore been considered too
expensive for always-on use.
Using the Dolos infrastructure, we intend to transform
prior works in dynamic analysis and trace visualization into
on-demand, interactive tools that a developer can quickly
employ when necessary. We believe that when combined
with on-demand replay, post-mortem trace visualization and
program understanding tools will become in vivo tools for
understanding program behavior at runtime.
ACKNOWLEDGEMENTS
This material is based in part upon work supported by the
National Science Foundation under Grant Numbers CCF-
0952733 and CCF-1153625. Any opinions, findings, and
conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views
of the National Science Foundation.
REFERENCES
1. Andrica, S., and Candea, G. WaRR: A tool for
high-fidelity web application record and replay. In DSN
(2011).
2. Cantrill, B. M., Shapiro, M. W., and Leventhal, A. H.
Dynamic instrumentation of production systems. In
USENIX ATC (2004).
3. Cardenas, T., Bastea-Forte, M., Ricciardi, A., Hartmann,
B., and Klemmer, S. R. Testing physical computing
prototypes through time-shifted and simulated input
traces. In UIST (2008).
4. Chow, J., Garfinkel, T., and Chen, P. M. Decoupling
dynamic program analysis from execution in virtual
environments. In USENIX ATC (2008).
5. Cornelis, F., Georges, A., Christiaens, M., Ronsse, M.,
Ghesquiere, T., and Bosschere, K. D. A taxonomy of
execution replay systems. In SSGRR (2003).
6. Dickinson, P. Instant replay: Building a game engine with
reproducible behavior. Gamasutra (July 2001).
7. Dunlap, G. W., King, S. T., Cinar, S., Basrai, M. A., and
Chen, P. M. ReVirt: enabling intrusion analysis through
virtual-machine logging and replay. SIGOPS Oper. Syst.
Rev. 36, SI (Dec. 2002), 211–224.
8. Glerum, K., Kinshumann, K., Greenberg, S., Aul, G.,
Orgovan, V., Nichols, G., Grant, D., Loihle, G., and Hunt,
G. Debugging in the (very) large: Ten years of
implementation and experience. In SOSP (2009).
9. Guo, Z., Wang, X., Tang, J., Liu, X., Xu, Z., Wu, M.,
Kaashoek, M. F., and Zhang, Z. R2: An application-level
kernel for record and replay. In OSDI (2008).
10. Kato, J., McDirmid, S., and Cao, X. DejaVu: Integrated
support for developing interactive camera-based
programs. In UIST (2012).
11. King, S. T., Dunlap, G. W., and Chen, P. M. Debugging
operating systems with time-traveling virtual machines.
In USENIX ATC (2005).
12. Ko, A. J., and Myers, B. A. Extracting and answering
why and why not questions about Java program output.
ACM Trans. Softw. Eng. Methodol. 20, 2 (Sept. 2010),
4:1–4:36.
13. Ko, A. J., Myers, B. A., Coblenz, M. J., and Aung, H. H.
An exploratory study of how developers seek, relate, and
collect relevant information during software maintenance
tasks. IEEE Trans. Softw. Eng. 32, 12 (Dec. 2006),
971–987.
14. Kuhn, A., and Greevy, O. Exploiting the analogy between
traces and signal processing. In ICSM (2006).