Imagine, if you will, a digital doppelgänger. A clone that looks, talks and behaves just like you, created from the depths of artificial intelligence, reflecting your every mannerism with eerie precision. As thrilling as it might sound, how would you feel about it?
Our research at the University of British Columbia turns the spotlight onto this very question. With advancements in deep-learning technologies such as interactive deepfake applications, voice conversion and virtual actors, it’s possible to digitally replicate an individual’s appearance and behaviour.
This mirror image of an individual created by artificial intelligence is referred to as an “AI clone.” Our study dives into the murky waters of what these AI clones could mean for our self-perception, relationships and society. We identified three types of risks posed by AI replicas: doppelgänger-phobia, identity fragmentation and living memories.
Cloning AI
We defined AI clones as digital representations of individuals, designed to reflect one or more aspects of the real-world “source individual.”
Unlike fictitious characters in digital environments, these AI clones are based on existing people, potentially mimicking their visual likeness, conversational mannerisms, or behavioural patterns. The depth of replication can vary greatly, from replicating certain distinct features to creating a near-perfect digital twin.
AI clones are also interactive technologies, designed to interpret user and environmental input, conduct internal processing and produce perceptible output. And crucially, these are AI-based technologies built on personal data.
As the volume of personal data we generate continues to grow, so too does the fidelity of these AI clones in replicating our behaviour.
Fears, fragments and false memories
We presented 20 participants with eight speculative scenarios involving AI clones. The participants, diverse in age and background, reflected on their emotions and on the potential impacts on their self-perception and relationships.
First, we found that doppelgänger-phobia was a fear not only of the AI clone itself, but also of its potential misuse. Participants worried that their digital counterparts could exploit and displace their identity.
Second, there was the threat of identity fragmentation. The creation of replicas threatens the unique individuality of the person being cloned, disturbing their cohesive self-perception. In other words, people worry that they might lose parts of their uniqueness and individuality in the replication process.
Lastly, participants expressed concerns about what we described as “living memories.” This relates to the danger posed when a person interacts with a clone of someone they have an existing relationship with. Participants worried that it could lead to a misrepresentation of the individual, or that they would develop an over-attachment to the clone, altering the dynamics of interpersonal relationships.
Preserving human values
It is evident that the development and deployment of AI clones carry profound implications. Our study not only contributes valuable insights to the critical dialogue on ethical AI, but also proposes a new framework for AI clone design that prioritizes identity and authenticity.
The onus lies with all stakeholders — including designers, developers, policymakers and end-users — to navigate this uncharted territory responsibly. This involves conscientiously considering moderation and user-generated data expiration strategies to prevent misuse and over-reliance.
Further, it’s imperative to recognize that the implications of AI clone technologies on personal identity and interpersonal relationships represent just the tip of the iceberg. As we continue to tread the delicate path of this burgeoning field, our study findings can serve as a compass guiding us to prioritize ethical considerations and human values above all.
Dongwook Yoon, Assistant Professor, Computer Science, University of British Columbia
This article is republished from The Conversation under a Creative Commons license. Read the original article.