October 22, 2020
Artificial Intelligence, Emerging Technology
AI, Neuralink, and the Evolution of Human-Machine Interfaces
Humans have always had an intimate, physical relationship with technology and the tools they use. Throughout history, the tools people have created share two basic characteristics: they require that a user manipulate them through some physical input, and the tool or some connected object responds accordingly, providing feedback via an externally observable output. For example, at the dawn of time a person might strike a sharp stone tool against a branch (the input) and then see the tool’s effect in the cuts created in the bark (the externally observable output).
Fast forward to more recent times, when people use more sophisticated tools in the form of computer systems. Even these modern digital tools require people to manipulate them through physical inputs (mice and keyboards) and to observe the feedback through an externally observable output (monitors). This paradigm of directly physical human-machine interfaces began to change over the past few decades with the rise of Artificial Intelligence (AI) capabilities.
AI has introduced an era in which computers not only respond to direct physical manipulation but can also observe and make predictions about a user’s intent. For example, computer vision systems record people’s natural kinesthetic body movements and gestures, use AI to interpret those gestures as requested actions (the input), and display the results on a screen (for example, scrolling through a menu). These systems still rely on some physical input from users, although there is no longer the same direct physical connection. Digital assistants such as Apple’s Siri, Amazon’s Alexa, or Google Home similarly require a physical input, although a very light one: the breath of air that carries a voice command.
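To make that gesture-driven input loop concrete, the sketch below shows one way such a pipeline could be wired together in Python. It is a minimal illustration under stated assumptions, not any vendor’s actual system: OpenCV captures webcam frames, a placeholder classify_gesture function stands in for the AI model that interprets movement, and a small mapping turns recognized gestures into on-screen actions such as scrolling a menu. The function name, gesture labels, and action strings are assumptions for illustration only.

```python
# Minimal sketch of a gesture-based human-machine interface loop.
# Assumptions: OpenCV is installed; classify_gesture() is a hypothetical
# stand-in for a trained gesture-recognition model.
import cv2

# Map recognized gestures to on-screen actions (illustrative labels only).
GESTURE_ACTIONS = {
    "swipe_up": "scroll menu up",
    "swipe_down": "scroll menu down",
    "pinch": "select item",
}

def classify_gesture(frame):
    """Hypothetical placeholder for an AI model that interprets
    body movement in a video frame and returns a gesture label."""
    return None  # a real system would run a trained model here

def run_interface(camera_index=0):
    capture = cv2.VideoCapture(camera_index)      # physical sensor: the camera
    try:
        while True:
            ok, frame = capture.read()            # observe the user's movement
            if not ok:
                break
            gesture = classify_gesture(frame)     # AI interprets the gesture (input)
            action = GESTURE_ACTIONS.get(gesture)
            if action:
                print(f"Performing action: {action}")  # externally observable output
            cv2.imshow("Gesture input", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"): # press 'q' to quit
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    run_interface()
```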
On August 28, 2020, Elon Musk and the Neuralink team presented a live demonstration of Neuralink’s latest experimental technology: a device that can be implanted in the skull to both read signals from and introduce electrical impulses into the outer cortex of the brain.1 In the presentation, Neuralink showed a video of a pig with a Neuralink implant walking on a treadmill. Predictive algorithms attempted to determine the test-subject pig’s position and motion from the implant’s readings, with surprising accuracy. Although early in development, this technology offers a view into a possible future where the tools of our digital world no longer require physical manipulation for an input or an externally observable output. It is an interesting world to imagine in my mind’s eye, even as I type this observation out on my physical keyboard.
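Although Neuralink has not published its decoding code, the general idea of predicting limb position from neural activity can be sketched with ordinary machine learning tools. The example below is a minimal illustration and not Neuralink’s actual method: it generates synthetic per-channel spike-rate features and a corresponding joint-position signal, then fits a simple linear (ridge regression) decoder with scikit-learn. The variable names, data shapes, and synthetic data are all assumptions for illustration.

```python
# Minimal sketch of decoding motion from neural activity.
# Assumptions: synthetic data stands in for real recordings; this is a
# generic linear decoder, not Neuralink's actual algorithm.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 5,000 time bins of spike rates from 64 electrode
# channels, and the 2-D position of one joint at each time bin.
n_bins, n_channels = 5000, 64
spike_rates = rng.poisson(lam=3.0, size=(n_bins, n_channels)).astype(float)
true_weights = rng.normal(size=(n_channels, 2))
joint_position = spike_rates @ true_weights + rng.normal(scale=2.0, size=(n_bins, 2))

# Hold out part of the recording to check how well the decoder generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    spike_rates, joint_position, test_size=0.2, random_state=0
)

decoder = Ridge(alpha=1.0)     # simple linear mapping: spike rates -> position
decoder.fit(X_train, y_train)  # learn the mapping from the training bins

print("Decoder R^2 on held-out data:", decoder.score(X_test, y_test))
```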
Seth Brooks is a Vice President in FCAT.
References & Disclaimers
1. See Neuralink Progress Update, Summer 2020, available at https://www.youtube.com/watch?v=DVvmgjBL74w
948978.1.0