Collaborative Research: SaTC: TTP: Small

Deepfakes, videos that are generated or manipulated by artificial intelligence, pose a major threat: they can spread disinformation, enable blackmail, and support new forms of phishing.
The objective of this transition-to-practice project is to develop the DeFake tool, a system that uses advanced machine learning to help journalists detect deepfakes in a way that is robust and intuitive, and that provides results explainable to the general public.
To meet this objective, the project team is engaged in four main tasks:
(1) Making the tool robust to new types of deepfakes, and having it show users why a video is fake;
(2) Protecting the tool from adversarial examples, e.g., small perturbations to a video that are specially crafted to fool detection systems;
(3) Working with journalists to understand what they need from the tool, and building an online community to discuss deepfakes and their detection; and
(4) Integrating advances from the other tasks into a stable, efficient, and useful tool, and actively disseminating this tool to journalists.
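To make the adversarial-example threat in task (2) concrete, the sketch below shows a single-step fast gradient sign method (FGSM) attack against a toy classifier. The tiny network is a hypothetical stand-in, not the DeFake tool's actual detector; it only illustrates how an imperceptibly small, specially crafted perturbation (here bounded by eps = 0.01 per pixel) can be aimed at flipping a detector's decision.

```python
import torch
import torch.nn as nn

# Toy stand-in for a deepfake detector: a tiny CNN producing
# two logits, [real, fake]. (Hypothetical architecture for
# illustration only.)
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

def fgsm_perturb(model, frame, label, eps=0.01):
    """One-step FGSM: nudge each pixel by at most eps in the
    direction that increases the classification loss, yielding a
    perturbation that is nearly invisible to a human viewer."""
    frame = frame.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(frame), label)
    loss.backward()
    adv = frame + eps * frame.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# A random "video frame" labeled fake (class 1); the attack pushes
# the detector toward misclassifying it as real.
frame = torch.rand(1, 3, 32, 32)
adv_frame = fgsm_perturb(detector, frame, torch.tensor([1]))
max_change = (adv_frame - frame).abs().max().item()
print(max_change)  # per-pixel change never exceeds eps
```

Hardening a detector against such attacks typically involves techniques like adversarial training, where perturbed examples are folded back into the training set.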
The project team is also leveraging visually interesting deepfakes to develop engaging education and outreach efforts, such as a museum-style exhibit on deepfake detection meant for broad audiences of all ages.
