
The Perils of Generative AI: Implications for Open Source Intelligence Research

12 Aug 2023

Comprehensive Talk

Andy Dennis

Abstract


The rapid advancement and proliferation of Generative AI across social media and other digital platforms have sparked significant discussion about its potential impact on various sectors, including Open Source Intelligence (OSINT) research. OSINT, a critical resource in the security, intelligence, and research fields, relies heavily on social media and other platforms to gather and analyze publicly available data. With the recent proliferation of Large Language Models (LLMs) and their interaction with these platforms, concerns have emerged about their potential to undermine the efficacy and integrity of OSINT.


This talk will first provide background on Generative AI and OSINT, explaining the capabilities of LLMs and the importance of OSINT in various fields. It will then examine how LLMs are being used, and could be used, with social media and other platforms, and their potential influence on OSINT research.


We will discuss several challenges posed by LLMs to OSINT. These include issues of data validity and reliability, as the difficulty of distinguishing between human-generated and AI-generated content can lead to skewed or false data. The potential for information pollution and the spread of misinformation is another significant concern, especially given the capacity of LLMs to generate large volumes of persuasive and contextually relevant content. Moreover, problems related to source attribution and provenance add a layer of complexity to the analysis of open source data. Lastly, the potential of Generative AI to power AI-driven influence operations could distort the information landscape, posing further challenges to OSINT.


Possible solutions and mitigation strategies will be proposed, including enhancing data validation and verification techniques, improving AI literacy among OSINT researchers, advocating for more transparency around Generative AI usage in social media, and employing AI tools to detect and flag AI-generated content.


The future of LLMs and other forms of Generative AI and their potential impact on OSINT will be discussed, with a focus on emerging trends and technologies. Suggestions for further research and study on this issue will be provided, highlighting the urgent need for continued vigilance and proactive measures in the face of rapidly evolving LLM capabilities.


In conclusion, the talk will underscore the importance of this issue for the OSINT community, emphasizing the need for ongoing research and adaptive strategies to navigate the challenges posed by the increasing use of Generative AI in social media and other platforms. The session will close with Q&A, offering an opportunity for further discussion with the Recon Village audience.




I. Introduction

Speaker introduction

Brief explanation of Generative AI and open source intelligence (OSINT)

Overview of the proliferation of Generative AI in social media and other platforms

Statement of the problem: the potential perils of Generative AI for OSINT research


II. Background

Development and capabilities of Generative AI

Pace of improvements between 2022 and 2023

Explanation of OSINT and its importance in security, intelligence, and research fields

The role of social media and other platforms in OSINT



III. The Intersection of Generative AI and OSINT

Explanation of how Generative AI is used in social media and other platforms

Proliferation of bots

Fake images of individuals, e.g. on LinkedIn

Discussion on the potential of AI to enhance or hinder OSINT

Election interference: hexad categories of attack vectors

Can we use LLMs to find fake accounts?

Large numbers of fake images interfering with missing-persons tracking, e.g. TraceLabs-style CTFs

Real-world example of Generative AI usage affecting OSINT

LinkedIn's proliferation of fake profiles: NPR, "That smiling LinkedIn profile face might be a computer-generated fake", https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles
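As a sketch of the "can we use automation to find fake accounts?" question above, here is a deliberately simple metadata heuristic. The profile fields, thresholds, and weights are all illustrative assumptions, not a validated model; a real system would learn weights from labelled data rather than hand-pick them.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    """Minimal stand-in for scraped profile metadata (fields are illustrative)."""
    account_age_days: int
    followers: int
    following: int
    posts: int
    bio: str


def fake_account_score(p: Profile) -> float:
    """Return a 0-1 suspicion score from simple metadata heuristics.

    Thresholds and weights are arbitrary assumptions for illustration.
    """
    score = 0.0
    if p.account_age_days < 30:  # very new account
        score += 0.3
    if p.following > 0 and p.followers / p.following < 0.01:  # follows many, followed by few
        score += 0.3
    if p.posts == 0:  # no original content
        score += 0.2
    if not p.bio.strip():  # empty bio
        score += 0.2
    return min(score, 1.0)


suspect = Profile(account_age_days=5, followers=2, following=900, posts=0, bio="")
normal = Profile(account_age_days=2000, followers=400, following=350, posts=120, bio="Photographer")
print(round(fake_account_score(suspect), 2), fake_account_score(normal))
```

In practice a single score like this would only triage accounts for human review; it cannot by itself distinguish a fake profile from a merely quiet one.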



IV. The Perils of Generative AI for OSINT

Issues of data validity and reliability with LLM-generated text content

The danger of information pollution and misinformation

Election 2024

Fake news articles

Fake photos, e.g. the fabricated Trump arrest photos

The problem of source attribution and provenance

Challenges in discerning human versus Generative AI created content

Images

Text

Sound

Potential for AI-driven influence operations


V. Possible Solutions and Mitigation Strategies

Developing more robust data validation and verification techniques

Enhancing AI literacy among OSINT researchers

Advocating for transparency and regulations for Generative AI usage in social media

Could this even work?

Employing AI tools to detect and flag AI-generated content
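One of the mitigations listed above, detecting and flagging AI-generated content, can be sketched with a toy statistical heuristic: LLM output sometimes shows more uniform sentence lengths than human prose. The coefficient-of-variation threshold below is an illustrative assumption, and this is not a reliable classifier; production detectors use trained models rather than a single statistic.

```python
import re
from statistics import mean, pstdev


def uniformity_flag(text: str, cv_threshold: float = 0.35) -> bool:
    """Flag text whose sentence lengths are suspiciously uniform.

    Computes the coefficient of variation (stdev / mean) of per-sentence
    word counts; a low value means unusually even sentences. The threshold
    is an illustrative assumption, not a tuned parameter.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < 3:
        return False  # too little evidence either way
    lengths = [len(s.split()) for s in sentences]
    cv = pstdev(lengths) / mean(lengths)
    return cv < cv_threshold


uniform = "The system records data. The model checks every record. The output lists all flags."
varied = "Wow. That was genuinely not what I expected to see when I opened the report this morning. Strange."
print(uniformity_flag(uniform), uniformity_flag(varied))
```

Even as a triage signal this would produce many false positives on formulaic human writing, which is exactly why the talk pairs automated flagging with improved AI literacy among researchers.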


VI. Future Outlook

Discussion on the potential future developments of Generative AI

Predicting the impact on OSINT, considering emerging trends and technologies

Open source libraries are already appearing, e.g. https://github.com/sshh12/llm_osint

Suggestions for further research and study on this subject


VII. Conclusion

Recap of the main points of the talk

Restatement of the importance of this issue for the OSINT community

Final thoughts on the need for vigilance, research, and proactive measures to address this challenge


VIII. Q&A Session

Open the floor for questions and further discussion on the topic

