PaperProof is a paper-digital proof-editing application that allows users to edit digital documents by applying gesture-based markup to printed versions. It interprets the pen strokes made by users on paper and can automatically execute the intended editing operations in the digital source document.
PaperProof operations may be executed either in real time, to support users reviewing documents at their workplace, or at a later time, if the user is currently on the move and does not have ready access to a digital version of the document. This enables users to switch seamlessly back and forth between paper and digital instances of a document throughout the document lifecycle, working with whichever medium is preferred for a given task.
PaperProof maintains a mapping between the printed and digital document instances that allows gestures, such as a stroke through a word, to be mapped back to the corresponding structural word element within the document. Furthermore, this logical mapping is maintained even if the digital instance of the document has been edited in parallel.
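The idea behind this logical mapping can be sketched as follows. This is an illustrative sketch only, not PaperProof's actual implementation (which is Java-based): it assumes each printed word carries a stable element ID assigned at print time, so a pen stroke can be resolved to the same logical word even after parallel digital edits. The class and method names are hypothetical.

```python
# Hypothetical sketch of the printed-to-digital mapping: every word gets a
# stable ID when the document is printed, and gestures captured on paper
# are resolved against IDs rather than against raw text or positions.

class DocumentMapping:
    def __init__(self, words):
        # Assign a stable ID to every word element at print time.
        self.id_to_word = {i: w for i, w in enumerate(words)}

    def resolve(self, word_id):
        """Map a gesture target (captured on paper) back to the word."""
        return self.id_to_word.get(word_id)

    def edit_word(self, word_id, new_text):
        # A parallel edit in the digital instance changes the text but
        # keeps the ID, so later paper gestures still find their target.
        self.id_to_word[word_id] = new_text

mapping = DocumentMapping(["PaperProof", "links", "paper", "and", "digital"])
mapping.edit_word(1, "bridges")          # parallel edit in the digital copy
assert mapping.resolve(1) == "bridges"   # the stroke still resolves correctly
```

Because resolution goes through stable IDs, a strike-through captured on yesterday's printout still targets the right word element in today's edited digital document.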
Currently, PaperProof is integrated with the open source word processing tool OpenOffice Writer. The iDoc publishing framework supports the transformation of the digital document into an interactive paper document. Interaction with the interactive document is based on Anoto digital pen and paper technology and is implemented using the iServer/iPaper framework.
PaperProof offers a gesture-based interface to trigger editing operations on the corresponding digital instance of the document. It offers a set of five proof-editing operations: insert, delete, replace, move and annotate.
The editing commands are triggered by an ordered sequence of one or more pen-based gestures, optionally followed by the user's textual input. We use the iGesture framework to recognise the specific gestures and the MyScript Intelligent Character Recognition (ICR) from VisionObjects to translate the handwritten textual information into digital string representations. An identified operation is stored in a special buffer until the paper-digital synchronisation is performed. Optionally, an acoustic and/or visual message informs the user when the gesture recogniser has identified an operation.
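The buffering behaviour described above can be sketched in a few lines. This is a minimal illustration under assumed names (`OperationBuffer`, `record`, `synchronise`), not the actual PaperProof code: recognised operations are queued and only applied to the digital document when synchronisation takes place.

```python
# Minimal sketch of deferred execution: recognised editing operations are
# buffered and applied to the digital document, in order, at sync time.

class OperationBuffer:
    def __init__(self):
        self.pending = []

    def record(self, op, *args):
        # Called whenever the gesture recogniser identifies an operation.
        self.pending.append((op, args))

    def synchronise(self, document):
        """Apply all buffered operations to the digital document in order."""
        for op, args in self.pending:
            document = op(document, *args)
        self.pending.clear()
        return document

def delete_word(doc, word):
    # Toy delete operation over a word-list document.
    return [w for w in doc if w != word]

buffer = OperationBuffer()
buffer.record(delete_word, "draft")
doc = buffer.synchronise(["final", "draft", "text"])
assert doc == ["final", "text"]
```

Deferring execution this way is what lets gestures be captured while the user is on the move, with the digital document updated later in one pass.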
The following operations and corresponding gestures are defined in PaperProof:
To issue a delete command, the user simply sketches a horizontal line gesture striking through the content to be removed. The corresponding digital entities are then removed from the digital document.
The replacement of information is performed by first marking the content to be removed with a horizontal line gesture. Next, the user writes the substitute text on paper. The ICR result is used as a replacement for the original information within the digital source document.
To insert content, the user first specifies the position by drawing an inverted caret gesture. Next, they write down the new information, which is recognised by the ICR software and inserted into the digital document.
To annotate structural elements such as paragraphs or sentences, the user first encloses them between opening and closing horizontal angular bracket gestures. Next, the annotation is written on paper. Finally, the ICR-recognised text is added to the original document.
PaperProof also supports side annotations. In this case, the target of the annotation is indicated by a vertical line gesture. As with the previous operation, the annotation is handwritten next to the annotated content and finally inserted into the digital artefact.
To move specific pieces of information to a different position, another composite gesture is used. First, the target entity is marked by sketching an enclosing pair of opening and closing angular bracket gestures. Then, the user points to the new location of the designated content with an upwards vertical line gesture.
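The operations above can be summarised as a dispatch from gesture sequences to editing commands. The table below is an illustrative sketch only: the gesture names mirror the descriptions above, but the recogniser interface is an assumption, not the actual iGesture API.

```python
# Illustrative dispatch table for PaperProof's proof-editing operations:
# an ordered sequence of recognised gestures selects the editing command.

GESTURE_OPERATIONS = {
    ("strike_through",): "delete",
    ("strike_through", "text"): "replace",
    ("inverted_caret", "text"): "insert",
    ("open_bracket", "close_bracket", "text"): "annotate",
    ("vertical_line", "text"): "annotate_side",
    ("open_bracket", "close_bracket", "up_vertical_line"): "move",
}

def classify(gesture_sequence):
    """Map an ordered sequence of recognised gestures to an operation."""
    return GESTURE_OPERATIONS.get(tuple(gesture_sequence), "unknown")

assert classify(["strike_through"]) == "delete"
assert classify(["inverted_caret", "text"]) == "insert"
assert classify(["open_bracket", "close_bracket", "up_vertical_line"]) == "move"
```

Note that ordering matters: the same strike-through gesture means delete on its own but replace when followed by handwritten text, which is why the recogniser must consider the whole sequence rather than individual strokes.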
Beat Signer, Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces, ISBN 978-3-8370-2713-6, Hardcover, 276 Pages, May 2008 (first published 2006 as Diss ETH No. 16218)
Nadir Weibel, Adriana Ispas, Beat Signer and Moira C. Norrie, PaperProof: A Paper-Digital Proof-Editing System, In Proceedings of CHI 2008, 26th SIGCHI Conference on Human Factors in Computing Systems: Interactivity Track, Florence, Italy, April 2008
Nadir Weibel, Beat Signer, Patrick Ponti and Moira C. Norrie, PaperProof: A Paper-Digital Proof-Editing System, In Proceedings of CoPADD 2007, 2nd Workshop on Collaborating over Paper and Digital Documents, London, UK, November 2007
Beat Signer, Ueli Kurmann and Moira C. Norrie, iGesture: A General Gesture Recognition Framework, In Proceedings of ICDAR 2007, 9th International Conference on Document Analysis and Recognition, Curitiba, Brazil, September 2007
Beat Signer, Moira C. Norrie and Ueli Kurmann, iGesture: A Java Framework for the Development and Deployment of Stroke-Based Online Gesture Recognition Algorithms, Technical Report ETH Zurich, TR561, September 2007
Nadir Weibel, Moira C. Norrie and Beat Signer, A Model for Mapping between Printed and Digital Document Instances, In Proceedings of DocEng 2007, ACM Symposium on Document Engineering, Winnipeg, Canada, August 2007
Moira C. Norrie, Beat Signer and Nadir Weibel, General Framework for the Rapid Development of Interactive Paper Applications, CoPADD 2006, 1st Workshop on Collaborating over Paper and Digital Documents, Banff, Canada, November 2006