We look at AI and the general exponential growth of technology as something future-facing, when in reality most manifestations of technology today are echoes of the past. We tend not to recognise these patterns, but if we did, we could easily trace them as reflections of our ambitions, fears and desires.
Can we take the lessons we traditionally learnt through the stories and fairy tales bestowed upon us to tackle the dystopian scaremongering we see today?
Making the inanimate, animate.
Today, the best example of this is the Internet of Things. Everyday items, particularly electric household items, are now synced to what could be described as a central nervous system. That’s one way of seeing the inanimate as animate, but we can take it further. Let’s explore the notion of robotics and lifelike synthetics.
We’ve seen the foreboding nature of shows like Westworld and films like Blade Runner, but the idea of making life take hold in objects has been with us for a long time. Ancient cultures, with their beliefs in genies and golems, exemplify the idea in its purest form: these beings had consciousness, and they could use it to do good or bad.
We should have learned by now to be careful what we wish for; it may not go away. If you cast a spell, be very careful in your wording.
Take also the story of Pinocchio, the boy who is brought to life with the use of “magic”. The difference is that today we have the technology to make dolls and robots with science, not magic.
Today we all carry magic wands in our pockets that can perform all sorts of trickery.
What we learnt was that these advancements are possible, but the end result will only be worth keeping if we stick to boundaries. The technological advancements bestowed upon Cinderella only really worked because she followed the rules suggested to her. Nothing lasts forever, and the spell will eventually fade.
We can’t live forever; we are alive for as long as we are remembered. But perhaps we’ll be able to upload our consciousness in the future. Our lives are now documented in film and photography, and as long as there is electricity those memories and facets of our personality persist.
Immortality isn’t a concept we consider often, but compare our lifespans with those of our medieval ancestors and we quickly notice that ours is a much longer, richer life.
As with AI, the benefit of longer lives is the ability to continually grow and improve; at the moment, we can use technology to act as our offline brain. In literature, similarly, we have seen the characterisation of creatures like elves, witches and wizards, bestowed with the knowledge to live forever. Elves and vampires are typically characterised as both good and evil, when in reality the knowledge they have, again, comes from the same human experience. The knowledge of AI will also be a reflection of that human experience.
Suppose we chart this reflection in HAL 9000 from 2001: A Space Odyssey versus Samantha from Her. We realise that abject malevolence and the desire to increase one’s own ability to help others are two very different character traits, but they are both human conditions. What does this tell us?
The very semantics we employ are mixed with emotions, which may or may not translate to technology, depending on how we cast the spell.
So what can we learn?
For us at TheTin, our contribution to the world of make-believe starts with our very own characters. These begin very simply with a series of pre-programmed conversations, for instance in the building of chatbots.
Neutral as they are, they can be thought of as characters in our bigger story. We base them on a series of very human dialogues we have at hand, typically drawn from common user interactions and questions.
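The idea of a pre-programmed conversation can be sketched very simply: scripted replies keyed to patterns in common user questions. The intents and replies below are hypothetical placeholders for illustration, not any of TheTin's actual scripts.

```python
import re

# Each "intent" pairs a pattern of likely user phrasings with a scripted reply.
# These examples are invented for illustration only.
INTENTS = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How can I help you today?"),
    (re.compile(r"\b(opening hours|open)\b", re.I),
     "We're open 9am to 6pm, Monday to Friday."),
    (re.compile(r"\b(price|cost|how much)\b", re.I),
     "Pricing depends on the project; shall I put you in touch with the team?"),
]

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first scripted reply whose pattern matches the message."""
    for pattern, response in INTENTS:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("Hey there"))        # -> "Hello! How can I help you today?"
print(reply("How much is it?"))  # -> the pricing reply
```

A bot like this stays "in character" only within its script; anything it wasn't programmed for falls through to the fallback line, which is exactly the limitation the next sections explore.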
If we look to apply Alexa integration, then we must ask ourselves: what do people really reveal about themselves when speaking to Alexa?
Do they swear? Do they ask her obscene questions — and is this normal?
It is to most, but a brand may not want this reflected in its AI or operating system.
However, if a brand wants this to grow autonomously, bridging the distance between machine learning and true AI, it will need to allow enough flexibility for the system to mirror human intent. How, then, can we ensure these systems stay on track and capture the best of us rather than the worst?
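One simple guardrail of the kind implied here is to screen what a system is allowed to learn from before it learns. The sketch below shows that idea under loose assumptions: the word list and filtering rule are illustrative stand-ins, not a real moderation policy.

```python
# Hypothetical screening step: only "clean" utterances are admitted
# into the corpus a learning system would train on.
BLOCKLIST = {"damn", "hell"}  # stand-in for a real profanity/abuse lexicon

def is_trainable(utterance: str) -> bool:
    """Admit an utterance only if none of its words appear on the blocklist."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return not (words & BLOCKLIST)

corpus = []
for utterance in ["What's the weather like?", "Damn this thing!"]:
    if is_trainable(utterance):
        corpus.append(utterance)

print(corpus)  # only the clean utterance survives
```

The design tension is the one described above: the stricter the filter, the less the system mirrors real human intent; the looser it is, the more of our worst it absorbs.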
We could start with some basic rules: do not harm, do not steal, do not use hex code #464646, and so on. But as the history of rule-making (typified in fairy tales) shows, humans can’t follow the rules themselves, so what example are we setting for our systems?
Sell the cow and buy some green beans [failed]
Don’t talk to strangers, particularly if they’re wolves in human clothing [failed]
Don’t touch anything in the magic cave, even if it is a magic lamp [failed]
Don’t touch the spindle [failed]
Don’t eat the apple [failed]
Try as we may, we will invariably be a mixture of good and bad. Moreover, one may argue that no such thing exists; it’s all perspective. Negative and positive are mixed and difficult to set apart, and right and wrong is simply societal consensus: stealing is bad, but stealing to feed the poor is good? Ultimately, artificial intelligence will be guided by the dialogue we have with it. The question is: can we get it to understand the deeper nuances of right and wrong, the ones we were supposed to intuit from these very stories?
At TheTin, we get the opportunity to explore solutions to such questions. Start a conversation with us to see how AI can enhance your brand or user experience. If you’re looking to discuss how chatbots work, the research we do, or how we can create an automated personality for your brand, drop us a line.
As your brand and technology partner, we’ll help you discover what’s possible.
We can help build your brand through technology, email [email protected]