Deepfake Voices

In News

  • A social media platform used speech synthesis to create deepfake audio of celebrities.
    • These deepfake audio clips made racist, abusive, and violent comments.

About

  • Deepfakes:
    • A deepfake is synthetic media created with Artificial Intelligence (AI) to produce convincing image, audio, and video hoaxes.
    • Deepfakes use deep learning AI to replace the likeness of one person with another in video and other digital media. 
    • The most common method relies on the use of deep neural networks involving autoencoders that employ a face-swapping technique.
    • Although deepfakes could be used in positive ways, such as in art, expression, accessibility, and business, they have mainly been weaponized for malicious purposes.
    • Deepfakes can harm individuals, businesses, society, and democracy, and can accelerate the already declining trust in the media.
  • Deepfake voice:
    • Deepfake voice, also called synthetic voice, uses AI to generate a clone of a person’s voice. The clone can accurately replicate the tone and accent of the target person.
    • While synthetic voices have legitimate uses in business, entertainment, and other fields, deepfake voices are usually associated with cloning a person’s voice to deceive someone.
    • Creating deepfakes requires high-end computers with powerful graphics cards, often leveraging cloud computing power.
    • Deepfakes can also be used to carry out espionage activities: doctored videos can be used to blackmail government and defence officials into divulging state secrets.
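The autoencoder-based face-swapping method mentioned above can be sketched as one shared encoder paired with a separate decoder per identity. The toy sketch below is only an illustration of the data flow: the dimensions, variable names, and linear maps are assumptions standing in for the deep convolutional networks that real systems train on thousands of frames.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64    # flattened 64x64 grayscale face (hypothetical size)
LATENT_DIM = 128      # shared latent representation (hypothetical size)

# A single encoder is shared across both identities, so it learns
# identity-independent features (pose, expression, lighting) ...
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
# ... while each identity gets its own decoder, which learns to
# reconstruct that specific person's face from the shared latent code.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    """Map a flattened face image into the shared latent space."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from a latent code with one identity's decoder."""
    return W_dec @ latent

def face_swap(face_of_a):
    """The swap step: encode person A's frame, decode with B's decoder."""
    return decode(encode(face_of_a), W_dec_b)

frame = rng.standard_normal(FACE_DIM)   # stand-in for one video frame
swapped = face_swap(frame)
```

Run per frame of a video, this composition (B's decoder applied to A's encoded frame) is what produces a face-swapped deepfake; in practice both decoders are trained jointly with the shared encoder until each reconstructs its own identity well.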

Concerns about using Deepfake voice

  • Lack of Regulations: Laws pertaining to their use do not exist in many countries, and law enforcement agencies are still working to establish proper regulations for producing and using artificially synthesized voices.
  • Ethical Concerns: It can enable impersonation, identity theft, and defamation.
    • Deepfakes are widely used in the political arena to mislead voters, manipulate facts, and spread fake news. 
  • Breach of Public trust: Erosion of public trust will promote a culture of factual relativism, unraveling the increasingly strained fabric of democracy and civil society.
  • Easy Availability: Gathering clear recordings of people’s voices is getting easier and can be obtained through recorders, online interviews, and press conferences.
    • Voice capture technology is also improving, making the data fed to AI models more accurate and leading to more believable deepfake voices.

Ways to detect Deepfake voice

  • Research labs use watermarking and blockchain technologies to detect deepfake content, but the technology designed to outsmart deepfake detectors is constantly evolving.
  • Multifactor authentication (MFA) and anti-fraud solutions can also reduce deepfake risks. 
  • Call-centre callback procedures can end a suspicious call and place an outbound call to the account owner for direct confirmation.
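The watermarking idea mentioned above can be illustrated with a toy least-significant-bit (LSB) scheme on audio samples: genuine recordings carry an embedded mark that synthetic audio lacks. This is only a minimal sketch under that assumption; production audio watermarks use far more robust techniques (spread-spectrum embedding, perceptual masking) that survive compression and re-recording.

```python
def embed_watermark(samples, bits):
    """Overwrite the least-significant bit of each PCM sample with one
    watermark bit. Changes each sample's amplitude by at most 1, which
    is inaudible for 16-bit audio."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def extract_watermark(samples, n_bits):
    """Read the watermark back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

# Toy 16-bit PCM samples and a hypothetical 8-bit watermark.
pcm = [1000, -512, 37, 2048, -7, 99, 12345, -2]
mark = [1, 0, 1, 1, 0, 0, 1, 0]

stamped = embed_watermark(pcm, mark)
recovered = extract_watermark(stamped, len(mark))
```

A verifier that fails to find the expected mark in a clip has a signal that the audio did not come from the watermarked source, which is the detection principle; the arms race lies in making such marks hard for deepfake tools to strip or forge.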

Legislations to deal with Deepfakes

  • Currently, very few provisions under the Indian Penal Code (IPC) and the Information Technology Act, 2000 can be potentially invoked to deal with the malicious use of deepfakes.
  • Section 500 of the IPC provides punishment for defamation.
  • Sections 67 and 67A of the Information Technology Act punish the publication or transmission of obscene and sexually explicit material in electronic form.
  • The Representation of the People Act, 1951, includes provisions prohibiting the creation or distribution of false or misleading information about candidates or political parties during an election period.

Way Ahead

  • In India, the legal framework related to AI is insufficient to adequately address the various issues that have arisen due to AI algorithms. The Union government should introduce separate legislation regulating the nefarious use of deepfakes and the broader subject of AI.

Source: The Hindu

 