The Pentagon’s Chief Digital and Artificial Intelligence Officer, Craig Martell, is “scared to death” of large language models such as ChatGPT. While the technology has created a buzz worldwide, with more corporations working to enhance its capabilities, Martell believes the disinformation AI can spread could wreak havoc across society.
Using AI as a weapon
The model doesn’t actually state facts when someone asks it a question; it generates fluent sentences in response to queries, polished enough to pass for the work of academic writers. Similar concerns have been raised by writers’ unions and others worried about AI taking their jobs. Martell, however, focuses on the fact that AI only presents information drawn from human-created sources, which means the material it pulls from can itself be inaccurate. Moreover, he warns that some actors could intentionally feed AI false data in order to spread disinformation. Martell adds, “This information triggers our own psychology to think ‘of course this thing is authoritative.’”
During AFCEA’s TechNet Cyber event in Baltimore, Martell said, “My call to action to industry is: don’t just sell us the generation. Work on detection.” He made the remark as software vendors at the event pitched AI tools and platforms to the Pentagon. Martell urged these companies to work on solutions that make it easier for people to differentiate between AI-generated and human-generated content, since, in his view, there is no way to fully control the flow of data.
Martell also doesn’t show much confidence in the longevity of AI models, observing that “no model ever survives first contact with the world.” According to him, every model is trained on old data and becomes obsolete over time: by the time a model is finally ready, it is no longer up to date.
However, AI has not scared everyone in the Pentagon. Lt. Gen. Robert Skinner, delivering a speech partially written by AI, acknowledged that implementing the technology will be a challenge, but said that, used correctly, it could open many doors for defense.