By Alma Angotti
Recent months have seen explosive development of, interest in, and publicity around generative artificial intelligence (AI) tools such as OpenAI's ChatGPT and Google Bard. While these tools have wide-ranging applicability and show promising early results for increasing efficiency and performance across a variety of industries, companies and individuals alike should be aware that all such publicly available tools capture and store any information submitted to them. Indeed, interest in and use of these tools has grown so dramatically that the Congressional Research Service recently published a primer on generative AI [1], which includes not only an overview of the tools and the current landscape but also specific mention of data privacy concerns. Similarly, a joint research paper published earlier this year [2] by the Stanford Internet Observatory, Georgetown University's Center for Security and Emerging Technology, and OpenAI explored potential uses of generative AI by malicious actors.
While OpenAI, Microsoft, Google, and others are gradually making fully private deployments of these tools available to customers, use of the public tools (even those with subscription models) should, for now, be approached with caution. Several sensitive data leaks have already been reported in recent months, most notably at Samsung [3], and OpenAI itself has confirmed a leak of its own [4]. Although these particular issues are being or have been addressed, users are strongly advised not to submit any personal, proprietary, confidential, or client-sensitive information to any such public tools. Users should also be aware that these tools have very few baseline controls to prevent them from returning incorrect or fabricated information, such as citing fictitious cases in a court filing [5].
Additionally, generative AI and other advanced AI tools are already a significant area of focus for bad actors, and new and evolving fraud schemes are appearing in the market. There is evidence that "traditional" fraud approaches, such as email and text phishing and spoofing, can be enhanced with generative AI [6] to appear more legitimate. Evolving fraud and scam techniques are also emerging, such as AI-enhanced voice cloning to imitate friends or family, and deepfake videos to facilitate identity theft (a recent FTC business guidance article [7] discusses similar issues). Though such techniques are not novel, as seen in a cybercrime case from 2019 [8], these types of fraud will likely become more widespread as generative AI tools are democratized globally. Companies across all industries should stay abreast of these developments, identify key areas of exposure to such threats, and consider investing in additional or improved fraud prevention measures.
For companies looking to identify current or potential risks arising from generative AI or other AI/machine learning tools, and for those seeking independent, expert implementation guidance, tuning, or validation of AI/machine learning solutions, Guidehouse provides premier advisory, risk assessment, and solution development services for commercial and government clients in the financial services, healthcare, energy, and defense sectors.
Guidehouse is a global consultancy providing advisory, digital, and managed services to the commercial and public sectors. Purpose-built to serve the national security, financial services, healthcare, energy, and infrastructure industries, the firm collaborates with leaders to outwit complexity and achieve transformational changes that meaningfully shape the future.