IBM, July 2023
IBM researchers are developing AI-text detection and attribution tools to improve the transparency and reliability of generative AI. The rapid proliferation of generative AI has raised concerns about disinformation and plagiarism.
IBM is building tools such as RADAR, designed to detect AI-generated text even after it has been paraphrased to evade detectors, alongside defenses against prompt-injection attacks. IBM is also working on attribution: identifying the origin of AI models and letting users trace model outputs back to the prompts and data points that produced them. Transparency and reliability in AI are central to managing risk and ensuring accountability.