The Future of Text III

Contents

This Book as Augmented PDF

Foreword by Vint Cerf

Welcome by Frode Hegland
Editor's Introduction
Why VR, Why Now?
Why AI, Why Now?
The Future of Us, The Future of Text
Improving not only VR Text or AI Text, but ALL Text
What does it mean to be ‘In VR’?
Documents in VR
Metadata Matters
Scale of Change
Concerns
Ownership & Transferability
What We Are Doing
The Bottom Line : Invitation & Dream
Future Text Lab
VR Experiments
Basic reading in VR experiences
Reflections on working in VR so far
Suggestion for quick mass adoption of VR for work

Brief thoughts on the Future of Text in VR
Tom Standage
Martin Tiefenthaler
Ken Perlin
Bernard Vatant
Stephanie Strickland
Anne-Laure Le Cunff
Stephan Kreutzer


Phil Gooch 48

David Lebow 49
Jim Strahorn 49
Esther Wojcicki 49
Cynthia Haynes 49
Peter Wasilko 49
Barbara Tversky 50
Michael Joyce 50
Denise Schmandt-Besserat 50
David Jay Bolter 51
Charlie Hargood 51
Jonathan Finn 51
Johannah Rodgers 52
Dene Grigar 53
John Cayley 53
Alan Laidlaw 54
Twitter Comments 55
Nova 55
Noda - Mind Map in VR 56
Jimmy Six-DOF 56
Kezza 56
Andreea Ion Cojocaru 58
Borges and Vygotsky Join Forces for BOVYG, Latest Virtual Reality Start-up 58
Author's Notes 62
Journal Guest Presentation ‘An Architect Reads Cognitive Neuroscience and Decides to Start an Immersive Tech Company’ : 13 May 2022 63
An Architect Reads Cognitive Neuroscience and Decides to Start an Immersive Tech Company 63
My plan for this talk 64
Assumptions 64
‘The Correspondence Theory of Truth’ 64
Refutations of correspondence theory of truth 64
Gap between perceptions & out there: called enaction theory 65
Cartesian anxiety 65
Varela's enactive cognition 65

Structural coupling 66

Bees & flowers example

Frogs example from Macy papers

Implication

How the human eye is perceiving research

Hand holding a cup of hot water research

My research

Necker cube

Merleau-Ponty: fix perception to match a certain story
Merleau-Ponty: World is thick with meaning

Lakoff and Johnson: metaphors are neural phenomena
Varela: lay down a path in walking

Homuncular Flexibility

Virtual reality allows experience of inhabiting non-human bodies

Our identification with our body & our limbs, might really not be fixed ... spending a half a day as an octopus

Man with a cane example

Foucault's Technologies of The Self

We now have the technology for altering “self”

Summary

Next... recent research on perception

‘Binocular rivalry’ or ‘homuncular flexibility’

Implication of this research for feeling like an octopus in VR
Cognitive processes altered

Relations to virtual space

The Control+Z effect

Experiencing Control+Z effect as an architect

Emerging social dynamics in VR Chat

We're not the only intelligent agents today

What is virtual space

You're actually also encountering the system that is you

Designing the environment and the person are the same thing

How would we design this environment-person same thing-ed-ness?
VR activates motor cortex

Use VR to test modifying cognitive-motor enaction

What purpose for these explorations of cognitive-motor enaction with VR

Implementations follow


Q&A

Andy Campbell
Dreaming Methods - Creating Immersive Literary Experiences
Presentation (pre-recorded for the Symposium)

Annie Murphy Paul
Operationalizing the Extended Mind

Apurva Chitnis
Journal: Public Zettelkasten
Limitations today
Public Zettelkasten
Implementation

Challenges

Barbara Tversky
The Future Magnifies the Past
Journal Guest Presentation: Mind in Motion

Bjorn Borud
Time, speed and distance
Computers and light speed
Signal strength and distance
The Drake equation

Our civilization

Bob Horn

Information Murals for Virtual Reality
Introduction: my recent work
My role as synthesizer
Examples of Information Murals
Overwhelmed by complexity?
Why am I here at this Symposium?
Text as idea chunks with subheads
Benefits of small idea chunks with subheads
Transition to other offerings


Assumption: improve human thinking 176

What can we do to move toward Einstein’s goal? 176
Problem: Show and link context 176
Show and link context...in Multiple Dimensions 177
Problem: Show process visually 177
Problem: build solid and supportive “scaffoldings for thinking” 178
Offer of help 178
Bibliography/Further Reading 178

VR Experiments with Bob Horn’s Mural 179
Bob Stein 182
Journal Guest Presentation: 4 July 2022 182
Screenshots 196
Brett Jackson 198
The evolution of mind maps for interactive VR experiences 198
Why text plays an important role 198
Idea Engine 198
Interactivity is the key 199
Your story 199
Highlights 199
Resources 200
Sourcing content 200
VR-specific considerations 200
Caitlin Fisher 202
Christopher Gutteridge 204
Daveed Benjamin 212
Thoughts about Metadata 212
Cynthia Haynes & Jan Rune Holmevik 214
Teleprompting Élekcriture 214
Works Cited 224
Deena Larsen 228

Access within VR: Opening the Magic Doors to All 228

Dene Grigar & Richard Snyder 232

Metadata for Access: VR and Beyond 232
Introduction: Proof of Concept 232
About The NEXT’s Extended Metadata Schema 233
Applying ELMS to VR Narratives 234
Final Thoughts 236
Acknowledgements 236
Bibliography 236

Eduardo Kac 238

Space Art: My Trajectory 238
Introduction 238
Agora: a holopoem for deep space 238
Spacescapes 240
Monogram 241
The Lepus Constellation Suite 243
Lagoogleglyphs 245
Inner Telescope 247
Adsum, an artwork for the Moon 249
Conclusion 250
References 251

Fabien Benetou 254

Why PDF is the wrong format to bring text to XR and why the Web with proper provenance and responsive design from stylesheets is all we need 254

The Case Against Books 258

Interfaces all the way down 261

Stigmergy Across Media 262

Utopiah/visual-meta-append-remote.js 264
code sample 264

Journal Guest Presentation 26 November 2022 268
Pre-Presentation 268
Presentation 272
Discussion 290

Beyond The Case Against Books 313

Frode Hegland
The notion that ‘everything is connected’ is damaging
Depths of Connections
Interacting with Connections
Realities : Following Citations
Realities : Following Mentions
Mapping Connections
Mapping The Future
Metadata is Context. Context is Connection
Visual-Meta Evolve
The state of my text art + the journey to VR
Editing
Research
Making it happen
The case for books
Robustness
Book Bindings
Digital Bindings
Future Books
‘Just’ more displays?
Stepping out
Size matters
Page to Page Navigation
Journal: Academic & Scientific Documents in the Metaverse
Metadata : Intrinsic & External

Jack Kausch
Why We Need a Semantic Writing System
Can there be non-sequential text?

Jad Esber
Journal Guest Presentation : 21 February 2022
Dialogue
Closing Comments

Gavin Menichini

Journal Guest Product Presentation : 25 February 2022
Chat Log

Harold Thimbleby
Getting mixed text right is the future of text
The author's experience of text
Interesting aside...
Mixed texts in single systems
Future text mixed with AI and ...

Conclusions

Jamie Joyce
Journal Guest Presentation : The Society Library
Dialogue

Jaron Lanier
Symposium Keynote
Q&A

Jim Strahorn
The Future of ... More Readable Books ... a Reader Point of View
The Problem
Objectives

Conclusions

Jonathan Finn
2D versus 3D displays inside VR

Conclusion

Kalev Hannes Leetaru
Seeing Through Others’ Eyes: Reimagining How We Experience The News
Globalization
From Firehose To Awareness
Falsehoods
Our Ever-Evolving Language

Preservation


Interface

Merging Human & Machine Intelligence
Search

Synthesis

Dimensionality

Interpretation & Emotion
Transformation

Representation

Ken Perlin
Symposium Closing Keynote: Experiential Computing and the Future of Text
Presentation
Q&A

Livia Polanyi
Virtual Vision

Lorenzo Bernaschina
Gems

Mark Anderson
Image Maps and VR: not as simple as supposed
Abstract
Background
The Problem Space
Display in 2D and bitmap (raster) vs. vector formats
The (HTML) Image Map
Raster vs. Vector Data
Issues for Presentation of Infographics in VR
Displaying image data in VR
All surfaces are not web displays
What is to be linked and where will the linked resource be found?
Legacy Files—re-mediating pre-existing resources
Current files—content designed for combined 2D/3D use
The nature of VR interaction
Tool support for linking and re-mediation

Conclusion


Reflections on working in VR so far

Matthias Müller-Prove
On Real and Virtual Text

From Language to Text

From Text to Online

Cool Reading

Hot VR

Real Text in the Virtual World

A Vision for Text in the Virtual World
Augmenting Human’s World
Provisions for the Future

Mez Breeze

Artificial Intelligence Art Generation Using Text Prompts
Beginnings

The Stage

The Lowdown

The Impact[s]

The Rules

Conclusions

Michael Roberts

Metaverse Combinators: digital tool strategies for the 2020's and beyond
Programming using node-based languages

Combinatorial thinking

Meta tools

Information Hiding

Hyperparameters

Machine learning approaches

Moving forwards together

Conclusion

Omar Rizwan

Journal: Against ‘text’

Patrick Lichty

Architectures of the Latent Space 550

Context 550
Content 551
Phil Gooch 554
Journal Product Presentation : Scholarcy 554
Dialogue 558
Peter Wasilko 576
Benediktine Cyberspace Revisited 576
Wexelblat’s Taxonomy of Dimensions 578
Linear Dimensions 578
Ray Dimensions 578
Quantum Dimensions 578
Nominal Dimensions 578
Ordinal Dimensions 579
Functional Dimensions 579
Visualizing, Editing, and Navigating Benediktine Cyberspaces 580
Visualization 580
Editing 580
Navigation 581
Comparing Objects 581
The DataProbe HUD: An Additional Possibility in VR 582
Future Work 583
Putting It All Together 584
Future VR Systems Should Embody The Elements of Programming 584
Requisite Affordances for Productive Work in VR 584
The VR Pane 585
The Transcript Pane 585
The Command Line Interface Pane 586
Viewspecs 586
What Can We Specify with Viewspecs? 587
Examples of Driving Complex Visualizations with a Command Line Viewspec Domain Specific Language (DSL) 587
UI Support for Discovery of the Viewspec DSL 588
The Gestalt We Are Aiming At 588
Bibliography 588

Pol Baladas & Gerard Serra

There are two great points to be shared after our practical explorations:

Sam Brooker

Supplementary Material: Devaluing the Work and Elevating the Worker

Scott Rettberg

Cyborg Authorship: Humans Writing with Al

Timur Schukin

Multidimensional

Yiliu Shen-Burke

Introducing Softspace. An initial design for a collaborative spatial knowledge graph
I. Introduction

Knowledge Synthesis
Spatial Computing
Softspace

II. Design

Items

Content Items
Container Items
Transclusion

Backlinks

Spatiality
Ordinospatial Layout
Force-Directed Layout
Cartesian Layout
Workspaces
Workflow Integration
Common File Formats
Cloud Storage Integration
In-App Web Browser
Multiuser Support
Interaction Model


Augmented Reality
Hand Tracking
Locomotion
Manipulation
Text Input
Art Design

III. User

IV. Flow
Workflow Phases
Example Flow

Journal Guest Presentation : Discussing Softspace

Yohanna Joseph Waliya
Post Digital Text (PDT) in Virtual Reality (VR)

Stephen Fry
In closing: A Prediction

Appendix : History of Text Timeline
13.8 Billion Years Ago 250 Million-3.6 Million 2,000,000-50,000 BCE 50,000-3,000 BCE 4000 BCE 3000 BCE 2000 BCE 1000 BCE 1 CE 100 200 300 400 500 600 700 800 900


1000 1100 1200 1300 1400 1500 1600 1700 1800 1810 1820 1830 1840 1850 1860 1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000 2010 2020 Future

Contributors to the Timeline

Symposium Gallery


Book Launch Remarks
State of the art
What is this space?
Change
Soon the environments of books will change
But only if we have the technical infrastructures
This book, with metadata
Fabien, the next step
Join us
Only if we have the mental infrastructures
The future?
Thank you
Discussion

Coda
Edgar
Glossary
Endnotes
References

Visual-Meta Appendix


The Future of Text Volume III, December 9th 2022

All articles are © Copyright 2022 of their respective authors. This collected work is © Copyright 2022 Future Text Publishing & Frode Alexander Hegland.

Dedicated to Turid Hegland.

A PDF is made available at no cost and the printed book is available from ‘Future Text Publishing’ (futuretextpublishing.com), a trading name of ‘The Augmented Text Company LTD’, UK. This work is freely available digitally, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

https://doi.org/10.48197/fot2022 ISBN: 9798367580655


What this is: This publication has grown out of a decade of the annual Future of Text Symposium. The symposium & book are an experiment and an experience, as is everything we do. All transcripts of live presentations are edited and video links included.

Bold text in transcripts is by the editor, sometimes by the speaker.

http://futureoftext.org


This Book as Augmented PDF

This book is available in printed form and as a PDF document with ‘Visual-Meta’ metadata, developed by the editor, Frode Hegland. If you choose to read it in our free ‘Reader’ PDF viewer for macOS (download), you can interact with the document in richer ways than you normally could.

You can read more about what Visual-Meta brings to metadata here: visual-meta.info
This work will also be made available in other formats for developers who would like to experiment with how we can interact with this book of a quarter million words. This will be in liquid, RTFD and JSON. You can download these directly from our website for as long as the website is live: http://futureoftext.org
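To give a rough sense of how software can make use of this, the sketch below (a minimal illustration, not the official Reader implementation) locates a Visual-Meta block in the plain text extracted from a PDF, assuming the @{visual-meta-start} / @{visual-meta-end} wrapping convention described at visual-meta.info:

    def extract_visual_meta(extracted_text):
        """Return the Visual-Meta block from a PDF's extracted text, or None."""
        start_tag = "@{visual-meta-start}"
        end_tag = "@{visual-meta-end}"
        start = extracted_text.rfind(start_tag)   # the block sits at the end of the document
        end = extracted_text.rfind(end_tag)
        if start == -1 or end == -1 or end < start:
            return None                           # no Visual-Meta appendix found
        return extracted_text[start + len(start_tag):end].strip()

Because the block is ordinary visible text, it travels with the document through copying, printing and re-digitising, along with everything else on the page.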

Reader
Download Reader for free: https://www.augmentedtext.info


Augmented Navigation

• Fold into Outline of headings (cmd- -)
• Right and left arrow to go to next and previous page
• Down and up arrow for next and previous article (level 1 heading)

Augmented Citations

• Click on a citation [in square brackets] to see the citation information
• Copy text which will be pasted as a citation if using a Visual-Meta aware word processor, such as our own ‘Author’: https://www.augmentedtext.info and it will also paste as a useful citation in other writing systems
• If you export a PDF from Author which has a citation to text in this book, the resulting PDF will not only let the user click on the citation (as above), but your reader will also be able to click to load this book to the page you cited, if they have the book downloaded

Augmented Find & Glossary

• Select text and cmd-F to see only where that text occurs in the document
• ...If the selected text has a Glossary entry, that entry will appear at the top of the screen


Foreword by Vint Cerf

For nearly a decade, the Future of Text group has focused on interactions with text as largely a two dimensional construct. The interactions allowed for varied 2D presentations and manipulations: text as a graph, text with appendices for citation and for glossaries, text filtered in various ways. In the past year, the exploration of computational text has taken on a literal new dimension: 3D presentation and manipulation. One can imagine text as books to be manipulated as 3D objects. One can also imagine text presented as connected components in a 3D space, allowing for richer organization of context for purposes of authoring, annotation or reading. The additional dimension opens up a richer environment in which to store, explore, consume and create text and other artifacts including 3D illustrations and simulated objects. One can literally imagine computable containers as a part of the “text” universe: active objects that can auto-update and signal their status in a 3D environment. Some of these ideas are not new. The Defence Advanced Research Projects Agency (DARPA) funded a project called a Spatial Database Management System at the MIT Media Lab in which content was found in simulated filing cabinets arranged in a 3D space. One “flew” through the information space to explore its contents. What is new is the development of 3D headsets with sufficiently high resolution and sensing capability to eliminate the earlier proprioceptive confusion that led to dizziness and even nausea with extended use.

The virtual environment these devices create permits convenient manipulation of artefacts as if they existed in real space. One of the most powerful organizing principles humans exhibit is spatial memory. We know where papers are that are piled up on our desks (“about three inches from the top...”). VR environments not only exercise this facility but also allow compelling renderings of information, for example, highlighting relevant text objects in response to a search. Imagine walking in the “stacks” in a virtual library and having books light up because they have relevant information responsive to your search. One could assemble a virtual library of books (and other text artefacts) from online resources for purposes of preparing to engage in a research project. Could we call this an information workbench or machine shop? Because of the endless possibilities for rendering in virtual three-space, there seem to be few limits to a textual “holodeck” in which multiple parties might collaborate.


We are at a cusp enabled by new technology and techniques. The information landscape is open for exploration.

Vint Cerf @ The 11th Future of Text Symposium. Hegland, 2022.


Welcome by Frode Hegland

Along with my co-curators Vint Cerf, Ismail Serageldin, Dene Grigar, Claus Atzenbeck and co-editor Mark Anderson, I welcome you to ‘The Future of Text’ Volume 3, where we focus primarily on text in virtual environments (VR/AR) and text augmented by AI. In other words, text in 3D space and text in latent space. This volume of The Future of Text includes:

• Presentations from the 11th annual Future of Text Symposium held on the 27th and 28th of September 2022 online and at The Linnean Society in London, either as transcripts or articles independent from presentations. Where presenters used images, they have largely been included here. No copyright infringement intended. If there is an issue of rights, please contact us. https://thefutureoftext.org
• Articles from our Journal & Transcripts from Monthly Presentations. https://futuretextpublishing.com

The hope is that this work will inspire you to think richly and deeply about a future where text is freed from the traditional flat rectangle. Soon we will live in a world where VR is just part of our daily experience. We have a brief opportunity left to dream of what this can be before big companies release their headsets and realise some of this potential. We now have an obligation to use the power of our imagination to think of alternative futures, unclouded by the corporate implementations. Together, I think we can dream of amazing futures which can inspire future generations who will have lived with VR all their lives. We start with a slightly paraphrased quote from a relatively obscure Apple Macintosh commercial from the 1990s: “The only limits will be the size of our imagination and the degree of our dedication.” Thank you for being a part of this journey. We can only truly improve the future of text if we do it together.

Frode Alexander Hegland | frode@hegland.com | Wimbledon, UK 2022


Editor's Introduction

VR (including AR) is about to go mainstream and this has the potential to offer tremendous improvements to how we think, work and communicate.

There are serious issues around how open VR work environments will be and how portable knowledge objects and environments will be. Think Mac vs. PC and the Web Browser Wars, but for the entire work environment.

The potential of AI-augmented text to improve the lives of individual users is also only now beginning to be understood, though AI has been used in various guises and under different names (ML, algorithms, etc.) to power fantastic services (speech understanding, speech synthesis, language translation and more), as well as social networks and ‘fake news’, for years.

More important than the specific benefits working in VR will have, is perhaps the opportunity we now have to reset our thinking and return to first principles to better understand how we can think and communicate with digital text. Douglas Engelbart, Ted Nelson and other pioneers led a ‘Cambrian Explosion’ of innovation for how we can interact with digital text in the 60s and 70s by giving us digital editing, hypertext-links and so on. But once we, the public, felt we knew what digital text was (text which can be edited, shared and linked), innovation slowed to a crawl. The hypertext community, as represented by ACM Hypertext, has demonstrated powerful ways we can interact with text, far beyond what is in general use. Still, the inertia of what exists and the lack of curiosity among users have made it prohibitively expensive to develop and put into use new systems.

With the advent of VR, where text will be freed from the small rectangles of traditional environments, we can again wonder about the possibilities. This will unleash public curiosity as to what text can be once again.

To truly unleash text in VR we will need to re-examine what text is, what infrastructures support textual dialogue and what we want text to do for us. The excitement of VR fuels our imagination again: just think of working in a library, where every wall can instantly display different aspects of what you are reading, having the outlines, glossary definitions and images from the book framed on the wall, all the while being interactive for you to change the variables in diagrams and see connections with cited sources. This could be inspiring or distracting, but the key is you can change it at a whim.

This is an incredibly exciting future once headsets get better (lighter, more comfortable, as well as better visual quality). Because this cannot happen without fundamental infrastructure improvements, what we build for virtual environments—VR—will benefit text in all digital forms. This is important.

The future of humanity will depend on how we can improve how we think and communicate, and the written word, with all its unique characteristics of being swimmable, readable at your own pace and so on, will remain a key to this. The future of text we choose will choose how our future will be written.

Why VR, Why Now?

My starting position is that VR, sometimes also called ‘metaverse’ these days and ‘cyberspace’ before, is about to go mainstream.

This is based on the Meta Quest 2, which is available for the mass market and currently outselling the Microsoft Xbox game consoles. It is just the start of what VR headsets will be able to offer. The view inside such a headset is already rock-solid: whatever environment is present, it looks like it is there, right in front of you. With Apple’s headset coming next year and improvements coming along as we have seen with personal computers, smartphones and smartwatches, this will rapidly continue to improve to the point where the visual fidelity becomes high and the discomfort low.

The future is coming fast. It is worth emphasising that in the same way the room-sized computer was not really a clear precursor to the smartphone, the current bulky, low-resolution and narrow field-of-view devices do not illustrate what is coming: in the near future these devices will feel lightweight and their visual quality will approach photo realism. It will feel like the world is transformed, not like we are wearing a heavy headset.

What this will unleash we do not know, but what I do know is that we, as a wider community of authors and readers of text, need to get involved in thinking about—dreaming and fantasising about—what it can be. For starters, we will not be using headsets all the time, any more than we now only ever use a smartphone or a desktop/laptop. We will enter VR when we need to focus on something, similar to how we enter a movie theatre, or turn on a large, flat screen TV when we want to be immersed, or watch general video ‘content’ on all our devices.

The distinction between VR and AR will likely become different modes on the same device but will have very different uses. Where AR refers to the world, VR will refer to any world. There is also an interesting middle ground, where the view of the world is superfluous and is just there for a sense of place: the knowledge objects being interacted with are in a space, and the background could be anywhere. This is demonstrated in Yiliu Shen-Burke’s work, where the user can interact with a constellation of knowledge and the background is simply a background, even though it is a live video of the user's room. There is also what is referred to as ‘reverse AR’, where the whole room environment is synthetic but the main object in the room is real, as built by the team at Shopify to let shoppers try a chair and then look at the room as though they are at home. There is a lot of creativity as to where boundaries will be and it will only become more and more interesting.

We had a historic opportunity to re-think text in the 1960s, and now we have another. This is a once-in-a-lifetime, once-in-a-species point in time. We are only a few years away—if that—from VR headsets becoming commonplace. The dreams of Doug Engelbart and Ted Nelson, among other true pioneers, have not had a place to put their feet over the last few decades. There has not been a foundation of need for improved text interaction from people. Now there is. With VR, it’s easier to see that there are new ways of working. Quite simply, we have an opportunity to dream again. ‘VR’ won’t be ‘VR’ for long, same as ‘hypertext’ became the web and then became just ‘online’. ‘VR’ will become ordinary very soon.

Why AI, Why Now?

The further assumption is that AI will continue to advance. What we are looking at is the emergence and improvement of automatic pattern recognition, classification, summarization, extrapolation, and natural language query-based information extraction for everything from speech to text and text analysis. We are also keeping an eye on the development of Self-Aware Artificial General Intelligence with a mixed-initiative conversational UI, since it never hurts to dream far into the future.

AI, if left unchecked, can present real dangers for society, as seen already in the basic AI algorithms which shape social media interactions and more.

AI can expand our understanding of creative expression. In this volume we have the experience of Mez Breeze who explores the art of AI and associated text-driven potentials.

One useful way to think of AI is as a digital map. I came to think of this when my 5-year-old son started navigating for us while driving in Norway this summer. Since the map was not un-augmented paper but a digital map on an iPhone, he was helped by always knowing our location, and there was always a blue line suggesting where we should go, so he could tell me ‘right’, ‘left’ and what exit to take off a roundabout, in his youthful happy voice. The map did not dictate where we went; we could always choose a more scenic route if we felt like it, and the blue line would update its suggestions.

More than anything, AI has been largely ignored when it comes to text. I can rely on the Apple Watch I use to accurately understand my commands, which is quite mind-blowing. I have refined speech to text in my macOS word processor ‘Author’ to take advantage of Apple’s increasingly powerful API. Some software provides coloured grammar when required and some suggests changes to writing style. There are of course relatively brute-force AI analyses of masses of academic documents, and there are writing tools which will write based on supplied text, such as GPT-3, but I suspect this is really just the snowflake on the top of the iceberg of what is possible.

What live analysis can a knowledge worker hope for when writing? How about hitting cmd-? and getting a list of suggested next paragraphs (not the less-than-helpful help menu). Maybe there are a few suggestions: one based on what the author has typed so far and the author’s own body of work, one based on what’s typed so far but including all known documents in the author’s field, and a third maybe also including what’s found on the web? This is the digital map approach, giving the user guidance, but not dictating. This is work currently undertaken by Pol Baladas on Fermat, for example.
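As a purely illustrative sketch of this digital map approach (the suggest_next function below is a hypothetical stand-in for whatever model or service an implementation might call; it is not an existing API, nor how Fermat works), the idea is simply to offer one candidate continuation per scope and leave the choice to the author:

    def suggest_continuations(draft, corpora, suggest_next):
        """Return one suggested next paragraph per corpus, keyed by corpus name.

        corpora maps a scope name to a list of source texts, for example
        {"own work": [...], "field": [...], "web": [...]}.
        suggest_next(draft, sources) is a hypothetical callable that proposes
        a continuation of the draft, informed by the given sources.
        """
        return {name: suggest_next(draft, sources) for name, sources in corpora.items()}

The author sees the suggestions side by side and remains free to ignore them all, just as the blue line on the map suggests a route without dictating it.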

AI is both ‘just beyond the horizon’ and also becoming mundane, so it is valuable to try to understand, then to revise our understanding, of how AI can augment our interactions with text.

The Future of Us, The Future of Text

2022 is the year of a continuing pandemic, along with economic collapse, inequality, a significant war in Europe which threatens the stability of countries near and distant, as well as the underlying climate change catastrophe which we now see starting to make an impact on our daily lives.

There is no question that if we are to survive, let alone thrive as a species, we need to improve the way we communicate and relate to each other. This will mean looking at how we can improve education, politics, scientific discourse and even how we can bring our spiritual practices into play to improve, quite simply, how we get along as people, how we develop shared goals and how we deal with conflict.

Much of dialogue, from politics, law and international treaties, to social media, lab reports, journal articles and personal chat, is in the form of text. I believe that we have to improve how we interact with textual knowledge, otherwise we will be manipulated by those who do, such as social media companies, and we will continue to be overwhelmed by the sheer volume of information. We cannot rely on face-to-face speech and video alone. We have to improve what text is, how we can interact with text and how we can represent text.

From its invention almost five and a half thousand years ago, the written word has proven remarkably powerful in augmenting how we think and communicate. The transition to digital text has transformed text, a medium which before becoming digital was primarily about fixity, about thoughts being securely placed on a substrate. When text became digital, this attribute largely vanished, with text now being interactive. A user could easily delete any text, cut & copy and edit the text freely, giving text a much more fluid character.

What was initially a revolution, when the editability and soon after the linkability of text became part of our daily lives, wore off: the magic of what was previously referred to as ‘hypertext’ simply became ‘text’, and analog text, previously only referred to as ‘text’, became ‘print out’ or ‘hard copy.’ The magic of digital text became mundane.

Other digital media continued to develop, however: digital images went from wireframes to photorealistic and games went from abstract ‘asteroids’ to deeply immersive and interactive experiences. We collectively thought we knew what text was, and little innovation took place. However, as digital text proliferated at an astounding pace, overwhelming those trying to stay on top of research, social media companies and those seeking to influence popular and political opinion went to work creating powerful tools for textual persuasion. We got social media echo chambers with algorithms designed to provoke, to increase ‘engagement’ (and thus ad views resulting in greater revenue), and modern ‘fake news’ at the start of the war in Ukraine in 2014, when Russian intelligence flooded digital mass media and social networks with fake and real news to the point where it became difficult to discern what was actually going on. Fake news continued to influence people’s opinions at the same time as research documentation stayed hardly digital, with little interaction afforded to the user. There are many issues to be discussed in this paragraph and I'd be very happy to go through them in person, but the point is simple: text interactions became sophisticated where there was an incentive, in the form of money and political control, to invest in them. Where the greatest benefit to the end user could have been seen, there has been little innovation or investment.

We had a historic opportunity to re-think text in digital form but we dropped the ball. We don’t have the ability to ‘fly through cyberspace’. We have the ability to cut and paste in Word, click on one-way, one-destination, un-typed links and edit a document together in Google Docs. We could do more, much more. We could imbue all documents with rich and robust metadata. This is a personal issue for me. We could provide authoring and reading software as powerful as Apple Final Cut. We could have reached for the stars, but the market and the few companies making text-focused software decided on ‘ease of use’, and we were left with big buttons to click on.


Improving not only VR Text or AI Text, but ALL Text

It is important to point out that the opportunity is not just about working in VR or using AI augmented text.

The real opportunity is that we can rethink everything about digital text because the public’s imagination will be energised—all text can benefit from a re-think and new dreaming.

It is clear that while text in documents will continue to matter, it will not just be text ‘floating in space’. It is also clear that better metadata will make text more usefully interactive on traditional digital displays as well. This is a historic opportunity primarily because we can restart and think from first principles: how to connect people and how to help us think with symbols/text. Our planet and our species are facing serious threats, so it is important that we learn from the past and that we are not shackled by the past.

We need to look at how we can usefully extend our cognition to better think with other minds, as Annie Murphy Paul discusses in her book The Extended Mind [2] and in her talk in this book. Jaron Lanier, the man who embodies VR and who presented the keynote at the Future of Text Symposium, puts it: “The solution is to double down on being human”.

The solution is at the same time to extend our mental faculties to really take advantage of the flexibility of representation and interaction these future environments will offer us. Just as we are today hamstrung by being tied to the models of paper documents, we must expand our minds in entirely new ways to get the most benefit out of what can now be created. This will mean building systems which connect with our physiology to learn to ‘read’ and ‘write’ in entirely new ways. Think how text would have seemed entirely artificial to a human 100,000 years ago, yet it seems natural today. Text is only lines on a substrate. What will be the future of text when the entire visual, aural—and soon haptic—field can be used for expression and impression?

What does it mean to be ‘In VR’?

Virtual environments will feel more like rooms or full environments than what we think of as textual ‘documents’ today. There will be intricate models of microscopic creatures for us to explore, and we will be able to walk through cities ancient, modern and futuristic. We will also be able to step into spaceships and explore entire planets and more. This will be exciting, and valuable, and it will take teams of people a serious investment in time, energy and money to build these experiences. A great example is the work of Bob Horn, who extends murals into multiple dimensions: what at first glance is just an image shown large in VR becomes, on further interaction, so much more than it could have been if it was simply printed onto a wall. We will also have new ways of telling stories, as Caitlin Fisher, who works on the opportunities for more immersive storytelling in VR, discusses in this book. The opportunities are vast for what we can be in virtual environments, but for this book and this project we are looking at text primarily, which will include many types of packages and experiences, one of which will remain a kind of book.

Documents in VR

One of the key questions we ask is: What is a document in virtual reality, and more specifically, what is an academic document in VR and what does it become with AI augmentations?

We look at academic documents as a special case since academia is a field connected by documents, and it is also a field where what is in the documents needs to be interacted with and connected.

This is distinct from commercial books where the owners of the intellectual property have reason to restrain the use of the text and is therefore a different strand of the future of text, one with constraints outside of what we are currently