ST Engineering’s Einstein.AI Can Be the Catalyst for Better Regulatory Environments in Asia-Pacific

By Benjamin Chan | 4Q 2024 | IN-7626

As the potential use cases of Generative Artificial Intelligence (Gen AI) have grown in the past few years, malicious misuse of the technology has taken the spotlight in a series of deepfake scams and scandals across Asia-Pacific and the world. Governments should continue to explore avenues for accelerating the development of counter Artificial Intelligence (AI) tools and build a robust set of regulations governing AI's future.

Singapore's First Form of Counter AI to Deepfakes Released

NEWS


In response to an early wave of demand for deepfake detection software, Singapore's ST Engineering unveiled its Einstein.AI deepfake detection tool for enterprises at its annual InnoTech Conference in September 2024. The tool's primary function is to detect inconsistencies in an uploaded video, such as odd eyebrow or lip movements and anomalous audio frequencies, that are likely to be computer-generated. ST Engineering aims to be part of the early wave of counter Artificial Intelligence (AI) tool vendors battling rampant AI-generated misinformation and to lead the dialogue on AI's regulatory and ethical use today.
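To make the idea concrete, the following is a minimal, illustrative sketch of temporal-inconsistency detection, not ST Engineering's actual method: it treats each video frame as a simplified list of facial landmark coordinates and flags frame transitions whose motion deviates sharply from the surrounding frames, a crude stand-in for the splice artifacts and unnatural movements that commercial detectors score.

```python
# Illustrative sketch only (NOT ST Engineering's actual algorithm):
# deepfake detectors commonly score temporal inconsistencies between
# frames. Here a "frame" is simplified to a flat list of facial
# landmark coordinates, and a transition with abnormally large motion
# is flagged as suspicious.

def motion_scores(frames):
    """Mean absolute landmark displacement between consecutive frames."""
    scores = []
    for prev, curr in zip(frames, frames[1:]):
        score = sum(abs(c - p) for p, c in zip(prev, curr)) / len(prev)
        scores.append(score)
    return scores

def flag_inconsistent(frames, threshold=3.0):
    """Return transition indices whose motion exceeds `threshold` x median."""
    scores = motion_scores(frames)
    median = sorted(scores)[len(scores) // 2]
    return [i for i, s in enumerate(scores) if median > 0 and s > threshold * median]

# Synthetic example: smooth motion, with one abrupt jump between
# frames 3 and 4 standing in for a splice artifact.
frames = [
    [10.0, 20.0], [10.5, 20.4], [11.0, 20.8], [11.5, 21.2],
    [25.0, 40.0],  # abrupt, unnatural jump
    [25.5, 40.4],
]
print(flag_inconsistent(frames))  # -> [3] (the frame 3 -> 4 transition)
```

Production detectors work on learned features (facial landmarks, blink patterns, audio spectra) rather than raw coordinates, but the underlying principle of scoring cross-frame consistency is the same.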

Growing Threat of Deepfake and Malicious AI

IMPACT


Deepfake-enabled fraud cases have surged worldwide in the past 2 years. The technology is also prevalent in the Asia-Pacific region, where it has caused a disruptive wave across enterprises and financial institutions. Deepfake technologies that replicate voices and human faces through Machine Learning (ML) have been used in various unethical and illegal schemes, such as financial fraud and the spread of misinformation. Two striking cases illustrate the deliberate harm deepfakes can cause. In April 2024, malicious actors manipulated a video of Philippines' President Ferdinand Marcos Jr. to show him urging military action against China amid rising tensions over claims in the South China Sea. In February 2024, a finance worker at a multinational firm in Hong Kong was the central target of an elaborate deepfake fraud in which scammers posed as the company's U.K.-based chief financial officer and tricked him into paying out US$25 million.

In response to the surging threat of deepfake content across the Internet, early private responses, such as ST Engineering's Einstein.AI, have begun to emerge. Government support for deepfake detection solutions, such as South Korea's "offensive cyber defense" strategy, will be crucial in identifying malicious actors and enhancing deterrence through proactive cyber defense measures. This support is essential for developing more robust and systematic software to combat the increasing threat posed by the unethical use of Generative Artificial Intelligence (Gen AI). Furthermore, public-sector support across Asia-Pacific will promote a cohesive development environment and increase demand for such solutions, which is likely to encourage more enterprises to enter the space.

Stronger Government Involvement Needed to Drive Innovation in Counter AI Technology

RECOMMENDATIONS


As AI and ML technologies mature, there is a need for better and more comprehensive AI laws that regulate the ethical and fair use of the technology. At the national level, countries like Hong Kong, South Korea, and Singapore have instituted harsher punishments for digital crimes, such as manipulating faces and voices to spread misinformation, or other abusive uses in specific cases of pornography, financial fraud, or electoral manipulation. However, other countries in Asia-Pacific, such as Indonesia and Thailand, have yet to provide a comprehensive set of governance rules and instead rely on ethical guidelines to moderate the technology's use.

Regionally, governing bodies such as the European Union (EU) and the Association of Southeast Asian Nations (ASEAN) have also had limited influence on AI regulation, as they offer only general guidelines on transparency requirements for user-generated content and lack concrete moderation and enforcement mechanisms. At the private level, many enterprises exploring AI maintain only a loose set of internal guidelines on the ethical use and development of AI models, with little to no guidance from governments on policies to prevent and safeguard against misuse or malicious AI.

It is imperative that local and national governments spearhead regulatory policies that foster closer cooperation on, and investment in, counter AI software. Accelerating such programs with better support and funding, and establishing clear AI regulations, will be key to helping countries overcome the threat of malicious AI usage. Additionally, strong public-private collaborations, in the form of industry dialogue and joint investments, should be prioritized to prepare the economy for future cybersecurity challenges and threats.
