Soon our children at school may speak to a hologram of a Carib Chief in the Cloud. A virtual conversation with anyone about their life, without them being in the room, may soon be quite possible. At the Illinois Holocaust Museum, a 3-D projection of Aaron Elster answers visitors' questions. But Elster is elsewhere. Visitors interact with what appears to be a hologram of the Holocaust survivor, created using Pepper's Ghost illusion techniques. The installation is part of an oral history project built around survivors of the Jewish Holocaust.

The Digital Humanities blur the stale departmental boundaries among history, language and linguistics, music, museums, artificial intelligence, film, gaming, anthropology and sound technology. The field offers numerous opportunities for students working with textual, visual and multimedia content as they explore the role of digital media in modelling, curating, analysing and interpreting digital representations of culture.
Technology now makes it possible for the First Peoples descendants of Carib and Arawak holocaust survivors to give authentic testimony – now and far into the future – about the genocide of the Kalina, Warao, Kalipuna, Nepuyo, Taino and Aruaca peoples who lived here 7,000 years ago. Fanning smoke with a feather, the Shaman among the descendants of survivors like Chief Ricardo Hernandez makes it impossible for us to forget their genocide. After the revolt at Arena, many First Peoples on the run plunged to their deaths in the ocean. Eighty-four were captured, 61 were shot dead, and 21 were hanged in St. Joseph, their bodies dismembered and displayed in the streets. The massacre took place in St. Joseph, not Arena.

The Holocaust Museum project in Illinois took nearly three years to complete. Survivors were seated in the middle of a half-dome studio filled with 100 high-definition cameras and 6,000 lights to capture them from multiple angles. The exhibit uses voice recognition technology and machine learning to let visitors ask survivors questions about their personal ordeals and hear answers that become more relevant as the technology evolves and the system learns. Machine learning is the reason for the rapid improvement in the capabilities of voice-activated user interface systems like Alexa. Data and machine learning are the foundation of Alexa's power, and that power only grows as its popularity and the amount of data it gathers increase. Every time Alexa makes a mistake in interpreting a request, that data is used to make the system smarter the next time around.
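That learn-from-mistakes loop can be sketched in a few lines. The following is a minimal illustration, not Amazon's actual pipeline: a toy bag-of-words intent classifier whose word counts are updated whenever a misinterpreted request is corrected. The class name and the sample intents are all hypothetical.

```python
from collections import Counter, defaultdict

class IntentLearner:
    """Toy bag-of-words intent classifier that improves from corrections."""

    def __init__(self):
        # word -> Counter of intents that word has appeared in
        self.counts = defaultdict(Counter)

    def train(self, text, intent):
        for word in text.lower().split():
            self.counts[word][intent] += 1

    def predict(self, text):
        scores = Counter()
        for word in text.lower().split():
            scores.update(self.counts[word])
        return scores.most_common(1)[0][0] if scores else None

    def correct(self, text, right_intent):
        # A misinterpreted request becomes new training data, so the
        # same request is understood correctly the next time around.
        self.train(text, right_intent)

learner = IntentLearner()
learner.train("play some music", "PlayMusic")
learner.train("weather forecast for today", "GetWeather")

print(learner.predict("turn the music down"))  # "PlayMusic" - a mistake
learner.correct("turn the music down", "SetVolume")
print(learner.predict("turn the music down"))  # now "SetVolume"
```

Real systems replace the word counts with deep neural networks trained on vast amounts of speech, but the principle is the same: each corrected mistake becomes training data.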
The Institute for Creative Technologies at the University of Southern California (USC) collaborated on the Holocaust project with the USC Shoah Foundation, founded by Steven Spielberg. The aim was to preserve Holocaust and other genocide survivor testimonies. Before a visitor interacts with the 'hologram-human', the guest views a short video in which the survivor narrates their personal story; then, quite unexpectedly, the image leans forward and says: 'But I have so much more to tell you. Now I'd like you to ask me questions.'

Like Apple's Siri technology, the voice recognition system responds by picking up on key words from questions. The hologram can answer thousands of questions, offering an almost lifelike conversational opportunity. Heather Maio, the managing director of 'Conscience Display', a company that creates realistic combat scenes complete with dialogue and visuals for military training drills, envisioned the interactive storytelling exhibition. To make the hologram, Elster had to sit still in a chair for several days under lights and cameras, answering more than 2,000 nuanced questions.
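A rough sketch of that keyword matching, with a hypothetical index of three question-and-answer pairs standing in for the thousands of recorded clips: the system scores each stored question by word overlap with what the visitor asked and plays the best-matching answer.

```python
import re

def tokens(text):
    """Lower-case a sentence and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_answer(visitor_question, qa_pairs):
    # Score every indexed question by how many keywords it shares
    # with the visitor's question; return the best-matching clip.
    asked = tokens(visitor_question)
    best_clip, best_overlap = None, 0
    for question, clip in qa_pairs:
        overlap = len(asked & tokens(question))
        if overlap > best_overlap:
            best_clip, best_overlap = clip, overlap
    return best_clip  # None if no keywords matched at all

# Hypothetical index: anticipated questions mapped to pre-recorded
# video clips of the survivor answering them.
qa_pairs = [
    ("where were you born", "clip_birthplace.mp4"),
    ("how did you survive the war", "clip_survival.mp4"),
    ("what happened to your family", "clip_family.mp4"),
]

print(best_answer("What happened to your family during the war?", qa_pairs))
# -> clip_family.mp4
```

This is why Elster answered more than 2,000 questions: the more anticipated questions in the index, the more likely a visitor's words find a close match.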

At the Department of Computational Linguistics in Cambridge, scientists contend that to be 'intelligent' a computer system must be able to learn from experience. That is, the machine must learn to process information and perform tasks by considering examples, without having been given any task-specific rules. The 'neurons' that undergird such a machine are mathematical functions containing 'weights' and 'biases'. When data is fed in, each neuron weights its inputs, applies its bias, and passes the result on. Today these systems have many layers, which is why they are called deep. It is inside this deep black box, where algorithms evolve and self-learn, that scientists often cannot explain how the system arrives at its results. The next steps hinge on getting such machines to flag a mistake and learn from the error as they train themselves to use data from multiple data banks and to take advantage of advances in natural language processing.
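The weights-and-biases computation described above can be written out directly. Here is a minimal sketch of a single 'neuron' and a two-layer pass; the numbers are made up for illustration, whereas a real network learns its weights and biases from examples.

```python
import math

def neuron(inputs, weights, bias):
    # One artificial 'neuron': weight each input, add the bias,
    # then squash the sum through a sigmoid activation function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is many neurons reading the same inputs; stacking
    # several layers is what makes a network 'deep'.
    return [neuron(inputs, row, b) for row, b in zip(weight_rows, biases)]

# A two-layer pass over a three-value input, with arbitrary
# hand-picked weights and biases.
x = [0.5, -1.2, 3.0]
hidden = layer(x, [[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]], [0.0, -0.1])
output = layer(hidden, [[1.5, -2.0]], [0.3])
print(output)  # a single value between 0 and 1
```

The 'black box' problem arises because a deep network may contain millions of such weights, and no single one of them explains why the system gave a particular answer.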

It has taken decades of research to understand natural human speech well enough to enable voice-activated interfaces like Alexa. The Echo cylinder itself contains some capability (speakers, a microphone and a small computer), but its real power emerges once it sends whatever you have told Alexa to the cloud, where 'Alexa Voice Services' parses the recording into commands it understands. The system then sends the relevant output back to your device. Amazon gives approved developers free access to Alexa Voice Services and encourages them to create new skills that augment the system's skill set, just as Apple did with the App Store. As a result of this openness, the list of skills Alexa can help with (currently over 30,000) continues to grow rapidly.
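The shape of that round trip can be sketched in a few lines. This is not the real Alexa Skills Kit API; the request format and the `handle` function below are hypothetical stand-ins, assuming the cloud service has already parsed the visitor's speech into an intent and its slots.

```python
# Hypothetical stand-in for a cloud-side skill handler. By the time
# code like this runs, Alexa Voice Services has already parsed the
# recorded audio into a structured request such as the one below.
request = {
    "intent": "AskChief",
    "slots": {"topic": "the Arena uprising"},
}

def handle(request):
    # Map the parsed intent to the text the device should speak.
    if request.get("intent") == "AskChief":
        topic = request["slots"].get("topic", "our history")
        return {"speech": f"Let me tell you about {topic}."}
    return {"speech": "I did not catch that. Please ask me again."}

response = handle(request)
print(response["speech"])  # sent back to the device to be spoken aloud
```

Perhaps by 2030 our children may speak to the Chief in the Cloud.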