Web and mobile services try to understand the desires and goals of users by analysing how they interact with their platforms. Smartphones, for instance, capture online data from users at a large scale and low cost.
Policymakers have responded by introducing safeguards to mitigate the risks of tech companies storing and processing citizens’ private information, such as health data.
Wearable devices are becoming a more significant element in this discussion because they collect data continuously, without the wearer necessarily being aware of it. Wearables such as smartwatches gather an array of measurements of your wellbeing, such as sleep patterns, activity levels and heart fitness.
Today, there are portable devices that obtain high-quality data on brain activity and eye movements, and from the skin (to detect temperature and sweat). Consumers can buy small devices to measure the body’s responses that, a few decades ago, were available only to research institutions.
Although wearables are commercially focused on health monitoring, researchers have long envisioned capturing other kinds of data on a user. A computer that could collect useful information related to a person’s brain activity, heart and skin function, or their movement patterns would be able to understand a huge amount about the user.
But it’s AI that could prove a game changer. Smaller wearables, combined with AI algorithms to process the data, could produce tools that augment our abilities and performance in daily life. There are also downsides to all this information gathering, however.
Daily routines
Let’s imagine a world where wearables play a more prominent role in daily life. Smart beds could wake us up at the perfect time to feel rested by reading our body temperature, respiration and brain activity. Intelligent kitchens could help us eat more healthily, preparing a tailored diet based on chemical cues in our bloodstream (biomarkers). A smart bike would automatically change gears based on the changing inclination of the terrain, and on our fitness levels, to support an effective workout.
Smart glasses could analyse the responses of the pupils in our eyes and our overall eye movements to feed us content that we are likely to enjoy (supported by AI algorithms). Video calls could evolve into 3D full-body holograms of friends and family. Lastly, immersive entertainment could be projected in our living rooms or exist in headsets to become 360-degree experiences rather than being confined to flat screens.
Although it may seem futuristic, hardware manufacturers are already trying to move screens and devices out of our hands. For example, the Mobile World Congress 2024 showcased several smartwatches, an AI “pin” device made by the company Humane that removes the need for a screen by projecting images onto the user’s hand, and the Air Glass 3 XR smart glasses.
Other companies have also recently released head-worn devices such as the Ray-Ban Meta smart glasses, the Apple Vision Pro and the Meta Quest 3. Another device, Galea, is a kind of helmet that can be attached to XR headsets to capture data from facial muscles, the brain, the eyes, the skin and the heart.
This is clearly more invasive than a smart ring or smart glasses. It allows researchers to explore how future digital services might look if computers could access a range of data from the human body. This data would go far beyond what they can currently access – such as what we do on our smartphones.
In general, body data from wearables could fundamentally change how we interact with computers and the internet. In 2007, the audience at an Apple product launch watched in awe as Steve Jobs scrolled on an iPhone for the first time, introducing an intuitive interaction that the entire world would eventually take for granted.
Similarly, replacing smartphones with wearables and headsets would free up our hands and require new kinds of interaction with technology. Current prototypes propose using our gaze to point and mid-air hand gestures to click. However, this means these systems must continuously collect data on the user’s body.
Digital sovereignty
Large datasets based on responses from the human body could unlock the design of digital tools that weave seamlessly into our daily lives with capabilities that are highly personalised. This includes the smart bed and the intelligent kitchen that can suggest a tailored diet.
The next wave of the internet is being designed around data decentralisation – where users can potentially have greater control over how their data is used. This could prevent the misuse of personal information.
For example, the inventor of the World Wide Web, Tim Berners-Lee, has been working on something called Solid. This open-source initiative lets people keep their data on personal web servers and choose which organisations can access it.
Instead of making people create an account for each service they want to use, Solid would provide a protocol to build what the project refers to as personal online data stores. This would be a way to let users host their personal data on their own computer or, alternatively, choose a trusted provider to host it based on their reputation and physical location.
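To make the idea of a personal online data store more concrete, here is a minimal TypeScript sketch using @inrupt/solid-client, one of the client libraries built for Solid. The Pod URL and the profile field read here are illustrative assumptions, not details from the project itself; the sketch simply shows an app reading one piece of data from a store that the user, rather than the service, controls.

```typescript
// A minimal sketch: reading a single value from a user's Solid Pod.
// The Pod URL below is hypothetical; in practice the user (or their chosen
// provider) hosts the Pod, and its access controls decide what an app can read.
import { getSolidDataset, getThing, getStringNoLocale } from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser";

// Hypothetical location of the user's profile inside their own Pod.
const podProfileUrl = "https://alice.example-pod-provider.net/profile/card#me";

async function readProfileName(): Promise<string | null> {
  // Fetch the dataset with the user's own session credentials; the Pod owner,
  // not the app, decides which agents may read or write this resource.
  const dataset = await getSolidDataset(podProfileUrl, { fetch });
  const profile = getThing(dataset, podProfileUrl);
  if (profile === null) return null;
  // Read one property (the person's name, via the vCard "fn" term).
  return getStringNoLocale(profile, "http://www.w3.org/2006/vcard/ns#fn");
}

readProfileName().then((name) => console.log("Name from Pod:", name));
```

The point of this design is that the app only receives what the Pod owner’s access settings allow, and moving to a different provider means changing a URL rather than recreating an account with every service.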
However, to really cement these initiatives, proactive legislation towards digital sovereignty – a person’s right to control their own digital data – would be required. This would guarantee an internet that truly takes privacy seriously.
In the era of wearables and powerful AI systems, a decentralised approach to the internet would be vital for letting citizens enjoy the benefits of these technological advances while continuing to own their data. It would move us towards citizens making active decisions about where their data is stored, who can access it, and for what purposes.