


Building a safer synthetic media landscape: what needs to be done

Thomas Graham
May 18, 2022
Non-consensual image abuse, aimed almost entirely at women, remains the most prominent malicious use of deepfakes and synthetic media.

It’s no exaggeration to say that synthetic media is revolutionising the creative industry. Like VFX, photo editing software, and other disruptive creative techniques, synthetic media technologies are changing what we previously thought possible when it comes to content creation.

Synthetic media tools present limitless possibilities to advance creativity in a positive way. In education, deepfake technology offers the potential to bring historical figures to life in the classroom. In fashion, it’s revolutionising the shopping experience, allowing customers to virtually model the latest clothing and accessories. In the music industry, it opens up creative possibilities for artists to deliver unique and thought-provoking performances, like Kendrick Lamar’s The Heart Part 5. And in the movie industry, it is used to amaze audiences while lowering VFX costs, as seen in the Season 2 finale of “The Mandalorian”, where the technology was used to de-age Mark Hamill’s Luke Skywalker.

The Malicious Use of Deepfakes Is of Immense Concern

However, it’s well known that synthetic media and deepfakes can be used maliciously, and these misuses are of immense concern. Malicious uses of synthetic media technology range from attempts to spread political misinformation, as illustrated by the fake video that recently targeted Ukraine’s President Zelensky, to cyber crimes such as fraud through voice and facial cloning. We also know the technology is being misused in the form of non-consensual image abuse. This abuse, which disproportionately affects women, is one of the most prominent malicious uses of deepfakes. Safeguards against these grave misuses of the technology need to be put in place with urgency.

As we build proprietary tools and technologies to create hyperreal synthetic content at Metaphysic, we recognise our responsibility as leaders in this young but fast-growing industry to join other stakeholders and regulators in building best practice and addressing these urgent issues. So what are the key questions and measures we can address to positively shape an ethical future for hyperreal synthetic media technologies?

Open Source Software

At the core of many technological advancements is open-source software: software whose source code anyone can inspect, modify, and enhance. Open-source development is generally seen as positive for innovation, as it fosters collaboration and allows a larger community of developers to experiment with programs and products.

This accessibility, however, also allows bad actors to use and contribute to open-source software projects. Constant vigilance is required to develop technology in a careful and deliberate manner that prevents malicious users from creating harm.

In the case of VFX and entertainment, open-source software is widely used by industry leaders to create synthetic media. From cloud computing infrastructure, to workflow management, to resource allocation for hardware and servers, open-source software and libraries are essential elements of the content creation process. No company or individual has the resources to create and maintain their own versions of all the software they use. While many AI algorithms used to create synthetic media are open source, some programs, such as smartphone apps and open-source tools for generating realistic face-swaps, make the creation of synthetic media far more accessible to non-professional content creators, and thus increase the likelihood of non-consensual, harmful content.

At Metaphysic, we are professional content creators and, like other industry leaders, use a number of proprietary and open-source software tools to create commercial content. We are also focused on working with regulators and industry stakeholders to guide the development and use of synthetic media in a way that is safer for everyone. To this end, our goal has always been to create our own proprietary versions of all of the software we use to create synthetic media, in a way that no third party could access. This means our software will not be available for anyone to use without moderation and restriction.
We currently have 15 innovators and engineers focused on this task and the team is growing by the week. Developing professional production-grade software takes time and we are dedicated to our long-term mission of harnessing the power of AI and synthetic media to create entertaining and meaningful content to delight audiences.

Building a Fair and Responsible Regulatory Framework for Deepfake Videos

Currently, the creation of convincing deepfake videos still requires advanced technical knowledge; in the future, it will likely be far less exclusive. Introducing regulation now is critical to address both current and future harmful content.

At both the state and federal level in the US, new laws are being introduced and tabled to criminalise malicious deepfakes, particularly those involving image abuse or political disinformation. In the UK, non-consensual deepfake image abuse, or ‘deepfake porn’, should also fall under legislation similar to that covering ‘revenge porn’, which was criminalised in the UK in February 2015. However, there is still a lot of work to be done. The current regulatory landscape related to deepfakes is complex and inadequate to deal with new technologies and their application by bad actors, particularly when it comes to actually bringing perpetrators to justice. This needs to change, and we are committed to helping those who are making this change happen.

In the EU, nascent legislation is also emerging to target the way malicious deepfake content is published and shared. Article 24b of the Digital Services Act (DSA) attempted to regulate and put pressure on creators and distributors of deepfake image abuse, requiring platforms that host pornographic content to ensure users can upload only once properly verified by the operator, using a registered phone number and email address, and to employ human content moderators. But regulators are restricted by constitutional and practical challenges, and when the European Parliament reached consensus on the DSA on April 23, 2022, Article 24b was not passed.

With a sensible governing regulatory framework, social media platforms will have further reason to moderate content and control misuse. Collaboration between legislators and platforms such as Facebook, Instagram, and TikTok could offer a viable solution to preventing and limiting the spread of malicious content and disinformation.

Promoting Deepfake Awareness and Literacy

Ultimately, there is no single solution. A combination of legislative and technological advancements, spearheaded by an informed public, is essential to reducing the impact of malicious content. At Metaphysic, we are committed to raising awareness of both the possibilities of positive synthetic media creation and the risks it poses.

Our commitment to ethical synthetic media is highlighted by our initiation and sponsorship of Synthetic Futures, a community of individuals, companies, and organisations dedicated to shaping a positive future for synthetic media in its many different forms.

We create content to celebrate and educate the public about synthetic media’s immense creative potential when used ethically and responsibly, and to address the technology’s malicious uses and unexpected consequences. By driving the future of synthetic media, we can help shape it into a technological paradigm that empowers individuals to own and safeguard their data and hyperreal identity while safely accessing the limitless potential of the hyperreal metaverse.