PM Modi has described deepfake videos as a big challenge for the future, warning that they could trigger a crisis in a culturally diverse country like India. Deepfake videos created through the misuse of Artificial Intelligence (AI) can cause serious trouble in the years ahead. Addressing the media at the Bharatiya Janata Party's Diwali meet in New Delhi today, PM Modi said, "I have also seen a deepfake video of myself, in which I am singing and doing Garba." Recently, a deepfake video of Bollywood and South actress Rashmika Mandanna went viral and was widely discussed.
Deepfake videos are a big problem
PM Modi said, “I recently saw a video in which I was seen singing a Garba song. There are many other such videos online.” He said deepfakes are used deliberately to spread misinformation, or there may be malicious intent behind their use. They may be designed to harass, intimidate, degrade and disempower people, and they can also create misinformation and confusion about important issues.
Need to educate people
PM Modi said this is a challenge because there is no readily available way to verify such videos. People will easily believe deepfake videos that go viral, which can create big problems in the future. He said there is a need to educate people about them, and to run programmes that help people understand Artificial Intelligence (AI) and deepfakes: how they work, what they can do and what problems they can cause. He added that he had also seen a deepfake video of himself doing Garba.
Ruckus over Rashmika Mandanna’s deepfake video
Recently, after Rashmika Mandanna’s deepfake video went viral, Union Electronics and Technology Minister Rajiv Chandrashekhar said on social media platform X (formerly Twitter) that deepfakes are new and very dangerous, and can become a major means of spreading rumours. He said social media platforms need to deal with such videos, and that there is a need to bring such matters within the ambit of law under the IT rules. After Rashmika’s video went viral, the Union Minister also followed up on the advisory on deepfakes sent to the Cyber Law Division in February 2023.