Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.misc    |    General topics about computers not cover    |    21,759 messages    |
|    Message 20,090 of 21,759    |
|    Pierre Delecto Romney to D. Ray    |
|    Re: THE PENTAGON WANTS TO USE AI TO CREA    |
|    18 Oct 24 15:18:35    |
XPost: alt.fan.rush-limbaugh, talk.politics.misc, alt.censorship
XPost: comp.ai.philosophy
From: robberbaron@invalid.ut

D. Ray wrote:
> THE UNITED STATES’ secretive Special Operations Command is looking for
> companies to help create deepfake internet users so convincing that neither
> humans nor computers will be able to detect they are fake, according to a
> procurement document reviewed by The Intercept.
>
> The plan, mentioned in a new 76-page wish list by the Department of
> Defense’s Joint Special Operations Command, or JSOC, outlines advanced
> technologies desired for the country’s most elite, clandestine military
> efforts. “Special Operations Forces (SOF) are interested in technologies
> that can generate convincing online personas for use on social media
> platforms, social networking sites, and other online content,” the entry
> reads.
>
> The document specifies that JSOC wants the ability to create online user
> profiles that “appear to be a unique individual that is recognizable as
> human but does not exist in the real world,” with each featuring “multiple
> expressions” and “Government Identification quality photos.”
>
> In addition to still images of faked people, the document notes that “the
> solution should include facial & background imagery, facial & background
> video, and audio layers,” and JSOC hopes to be able to generate “selfie
> video” from these fabricated humans. These videos will feature more than
> fake people: Each deepfake selfie will come with a matching faked
> background, “to create a virtual environment undetectable by social media
> algorithms.”
>
> The Pentagon has already been caught using phony social media users to
> further its interests in recent years. In 2022, Meta and Twitter removed a
> propaganda network using faked accounts operated by U.S. Central Command,
> including some with profile pictures generated with methods similar to
> those outlined by JSOC. A 2024 Reuters investigation revealed a Special
> Operations Command campaign using fake social media users aimed at
> undermining foreign confidence in China’s Covid vaccine.
>
> Last year, Special Operations Command, or SOCOM, expressed interest in
> using video “deepfakes,” a general term for synthesized audiovisual data
> meant to be indistinguishable from a genuine recording, for “influence
> operations, digital deception, communication disruption, and disinformation
> campaigns.” Such imagery is generated using a variety of machine learning
> techniques, generally using software that has been “trained” to recognize
> and recreate human features by analyzing a massive database of faces and
> bodies. This year’s SOCOM wish list specifies an interest in software
> similar to StyleGAN, a tool released by Nvidia in 2019 that powered the
> globally popular website “This Person Does Not Exist.” Within a year of
> StyleGAN’s launch, Facebook said it had taken down a network of accounts
> that used the technology to create false profile pictures. Since then,
> academic and private sector researchers have been engaged in a race between
> new ways to create undetectable deepfakes, and new ways to detect them.
> Many government services now require so-called liveness detection to thwart
> deepfaked identity photos, asking human applicants to upload a selfie video
> to demonstrate they are a real person — an obstacle that SOCOM may be
> interested in thwarting.
>
> The listing notes that special operations troops “will use this capability
> to gather information from public online forums,” with no further
> explanation of how these artificial internet users will be used.
>
> This more detailed procurement listing shows that the United States pursues
> the exact same technologies and techniques it condemns in the hands of
> geopolitical foes. National security officials have long described the
> state-backed use of deepfakes as an urgent threat — that is, if they are
> being done by another country.
>
> Last September, a joint statement by the NSA, FBI, and CISA warned
> “synthetic media, such as deepfakes, present a growing challenge for all
> users of modern technology and communications.” It described the global
> proliferation of deepfake technology as a “top risk” for 2023. In a
> background briefing to reporters this year, U.S. intelligence officials
> cautioned that the ability of foreign adversaries to disseminate
> “AI-generated content” without being detected — exactly the capability the
> Pentagon now seeks — represents a “malign influence accelerant” from the
> likes of Russia, China, and Iran. Earlier this year, the Pentagon’s Defense
> Innovation Unit sought private sector help in combating deepfakes with an
> air of alarm: “This technology is increasingly common and credible, posing
> a significant threat to the Department of Defense, especially as U.S.
> adversaries use deepfakes for deception, fraud, disinformation, and other
> malicious activities.” An April paper by the U.S. Army’s Strategic Studies
> Institute was similarly concerned: “Experts expect the malicious use of AI,
> including the creation of deepfake videos to sow disinformation to polarize
> societies and deepen grievances, to grow over the next decade.”
>
> The offensive use of this technology by the U.S. would, naturally, spur its
> proliferation and normalize it as a tool for all governments. “What’s
> notable about this technology is that it is purely of a deceptive nature,”
> said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “There are
> no legitimate use cases besides deception, and it is concerning to see the
> U.S. military lean into a use of a technology they have themselves warned
> against. This will only embolden other militaries or adversaries to do the
> same, leading to a society where it is increasingly difficult to ascertain
> truth from fiction and muddling the geopolitical sphere.”
>
> Both Russia and China have been caught using deepfaked video and user
> avatars in their online propaganda efforts, prompting the State Department
> to announce an international “Framework to Counter Foreign State
> Information Manipulation” in January. “Foreign information manipulation and
> interference is a national security threat to the United States as well as
> to its allies and partners,” a State Department press release said.
> “Authoritarian governments use information manipulation to shred the fabric

[continued in next message]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
(c) 1994, bbs@darkrealms.ca