Undress AI, Deepfakes, and the need to regulate the unethical use of Artificial Intelligence

In recent years, artificial intelligence (AI) has become increasingly sophisticated. This has led to the development of new technologies that have the potential to be used for both good and bad.

One such technology is Undress AI, a website that uses AI to remove clothing from images of people. This technology has been criticized for its potential to be used to create non-consensual sexual imagery and to facilitate extortion.

Another technology that has been used for unethical purposes is deepfakes. Deepfakes are videos or audio recordings that have been manipulated to make it look or sound like someone is saying or doing something they never did. Deepfakes have been used to spread misinformation and to damage people’s reputations.

The development of these technologies raises important questions about the ethical use of AI. How can we ensure that AI is used for good and not for harm?

One way to address this issue is to regulate the development and use of AI. Governments could create laws that prohibit the use of AI for certain purposes, such as creating deepfakes or stripping clothing from images without consent.

Educating the public about the potential risks of AI should also be taken seriously. People need to be aware of how AI can be used to manipulate and deceive them. AI education frameworks and ethical guidelines for the development and use of AI should be created by experts in AI, law, and ethics, so that those who are knowledgeable in the field and understand the extent of AI’s powers are the ones paving the way forward.

Tech Whisperer founder Jaspreet Bindra said technology would have to evolve to the point where, just as users download an antivirus, they also run a ‘classifier’ that distinguishes between what is genuine and what is fake.

“The solution has to be two-pronged – technology and regulation. We need to have classifiers to identify what is real and what is not. Similarly, the government needs to mandate that something that is AI-generated should be clearly labelled as such. The government should look to have this clause in the Digital Personal Data Protection Bill.”
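
To make the ‘classifier’ idea concrete, here is a minimal sketch in Python of what such a detector could look like: a pretrained image backbone with a two-class head that would first have to be fine-tuned on labelled genuine and AI-generated images. The model choice, labels, helper function, and file name are illustrative assumptions, not a production deepfake detector.

```python
# Illustrative sketch only: a binary "genuine vs AI-generated" image classifier.
# The backbone, labels, and file path are assumptions for demonstration.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a generic pretrained backbone and replace its head with two
# classes: index 0 = genuine, index 1 = AI-generated (assumed labelling).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # in practice this head would first be fine-tuned on labelled data

# Standard ImageNet-style preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(path: str) -> str:
    """Label a single image file (hypothetical helper)."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return "likely AI-generated" if probs[1] > probs[0] else "likely genuine"

print(classify("suspicious_photo.jpg"))  # hypothetical input file
```

Detection of this kind is an arms race: as generative models improve, classifiers have to be retrained, which is why the quote above pairs them with regulation and mandatory labelling of AI-generated content.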

Although most AI tools, especially the ones with obvious potential for abuse, display a notice warning users to use them responsibly, many users still disregard the warning and use the tools however they please.

Ways AI can cause damage

Artificial intelligence (AI) has the potential to be a powerful force for good in the world, but it also has the potential to be used for harm. Here are some of the harms that have already occurred through the use of AI, as well as some possible harms:

  • Discrimination: AI systems can be biased, which can lead to discrimination against certain groups of people. For example, AI systems have been used to make decisions about who gets a loan or a job, and these decisions have often been biased against people of color and women (a simple way to check for this kind of skew is sketched after this list).
  • Manipulation: AI can be used to manipulate people, for example by spreading misinformation or creating deepfakes that damage people’s reputations. One report says some apps access users’ photo galleries and use AI to generate nude photos, which are then used to blackmail the users.
  • Privacy violations: AI can be used to collect and analyze personal data, which can lead to privacy violations. For example, AI systems have been used to track people’s movements and to collect their personal information without their consent.
  • Job losses: AI can automate tasks that are currently done by humans, which could lead to job losses. For example, AI is being used to automate customer service jobs, which could lead to millions of job losses in the coming years.
  • Weaponization: AI could be used to develop autonomous weapons systems that could kill without human intervention. This is a major concern, as it could lead to a new arms race and to the deaths of innocent people.
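
As a concrete illustration of the discrimination point above, the sketch below (synthetic data and hypothetical field names, not any real lending system) shows one simple way a loan-approval model’s decisions can be audited: compare approval rates across demographic groups and flag a large gap for review.

```python
# Toy bias audit on synthetic decisions from a hypothetical loan-approval model.
from collections import defaultdict

# Each record pairs the model's decision with the applicant's group label.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

# Approval rate per group and the gap between the best- and worst-treated group.
rates = {group: approvals[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print("Approval rate per group:", rates)
print(f"Approval-rate gap: {gap:.2f}")  # a large gap warrants a closer review
```

Real fairness audits go further, for example by controlling for legitimate factors and comparing error rates, but even this basic check can surface the kind of skew described above.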

In addition to the potential harms listed above, experts have warned against publicizing private information, especially through what is termed “sharenting.” Sharenting is the practice of parents publicizing sensitive data about their children online, often in the name of “creating content.” This information can be manipulated using specialized tools to threaten the children’s safety.

“Children between the ages of 11 and 16 are particularly vulnerable. Advanced tools can easily morph or create deepfakes with these images, leading to unintended and often harmful consequences. Once these manipulated images find their way to various sites, removing them can be an arduous and sometimes impossible task,” policy expert Kanishk Gaur said.

These are just some of the harms that are possible through the use of AI. It is important to be aware of these risks and to take steps to mitigate them. We need to develop AI systems that are fair, transparent, and accountable. We also need to put in place safeguards to prevent AI from being used for harm.

In addition to regulation, there are a number of other things that can be done to mitigate the risks of unethical AI. These include:

  • Transparency: It is important to be transparent about how AI is being used. This means providing clear information about the purpose of the AI, the data it is using, and the potential risks.
  • Accountability: There needs to be accountability for the use of AI. This means holding individuals and organizations responsible for the consequences of their actions.
  • Diversity: The development and use of AI should be inclusive. This means involving a diversity of people in the process, from the design stage to the deployment stage.
  • Education: People need to be educated about AI. The public needs to be kept up to date on the potential risks and benefits of AI, and on how to use it safely and ethically.

Countries and organisations that have made moves to regulate AI

A number of governmental bodies and private organizations have moved to regulate the unethical use of artificial intelligence. Some of these efforts have been finalized, while others are still in the pipeline.

In 2021, the European Commission proposed the Artificial Intelligence Act, a comprehensive regulation that sets out rules for the development and use of AI across a variety of sectors. The Act prohibits the use of AI for certain purposes, such as social scoring and certain forms of mass biometric surveillance, and it requires companies to comply with a number of ethical and transparency requirements.

In North America, the United States is yet to enact a comprehensive AI regulation, but some moves have been made at the federal level. The White House has issued an executive order on promoting trustworthy AI, which calls for the development of ethical guidelines and standards for AI development and use. The order also directs federal agencies to develop plans to address the risks of AI, such as bias and discrimination.

The United Kingdom has published a national AI strategy and is developing its approach to regulation. The government has said that it wants to ensure that AI is used for good and not for harm, and that it wants to protect people from the risks of AI, such as bias and discrimination.

The Organisation for Economic Co-operation and Development (OECD) has adopted a set of AI Principles. The principles are intended to help governments, businesses, and individuals develop and use AI in a way that is safe, ethical, and beneficial to society.

It is likely that we will see more regulation in the coming years, as governments and organizations grapple with the challenges and opportunities of AI. While waiting for governments and organisations to make these moves, it is wise for internet users to take precautions when using new technology, especially one as powerful as artificial intelligence.
