Human-Like Bots Infiltrate U.S. Lawmaking Process   ◆

FiscalNote, on the use of Natural Language Generation (NLG) to create apparently fraudulent support for ending Net Neutrality:

Form letters, or comments with identical language, are neither a new development nor a foolproof indicator of fraudulence. Many form letters are submitted legitimately by humans at the prompting of a public figure or interest group, while others are submitted automatically by basic computer programs. The NLG activity unearthed by FiscalNote differs from form letters in that the resulting comments are distinct from one another, are generated by more advanced and human-like bots, and are definitive evidence of fraudulent behavior. Each of these NLG-driven comments, like human speech, is formed via a sequence of phrases. Bots generate these linguistically distinct comments by swapping out the phrases in one for different phrases with identical meaning in another.

The bots can generate “[n]early 4.5 septillion unique permutations”.
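The phrase-substitution scheme the article describes can be sketched in a few lines: build a template of phrase "slots," each holding interchangeable wordings, then pick one phrase per slot. The permutation count is just the product of the slot sizes, which is how a modest template explodes into septillions of distinct comments. This is a minimal illustrative sketch; the slot phrases below are hypothetical and not taken from the actual bot-generated comments.

```python
import math
import random

# Hypothetical phrase slots: each slot holds interchangeable phrasings.
# These example phrases are illustrative, not from the real comments.
SLOTS = [
    ["I strongly urge you to", "I am writing to ask that you", "Please"],
    ["repeal", "roll back", "undo"],
    ["the current net neutrality rules", "Title II regulations",
     "the FCC's open-internet order"],
    ["because they hurt", "since they harm", "as they damage"],
    ["consumers", "everyday Americans", "ordinary citizens"],
]

def total_permutations(slots):
    """Number of distinct comments the template can produce."""
    return math.prod(len(s) for s in slots)

def generate_comment(slots, rng=random):
    """Pick one phrasing from each slot and join them into a sentence."""
    return " ".join(rng.choice(s) for s in slots) + "."

print(total_permutations(SLOTS))  # 3 * 3 * 3 * 3 * 3 = 243 with these toy slots
print(generate_comment(SLOTS))
```

With only five slots of three phrases each, this toy template already yields 243 distinct comments; reaching roughly 4.5 septillion just takes more slots and more alternatives per slot, since the count grows multiplicatively.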

The piece’s conclusion:

But NLG technology, like artificial intelligence more broadly, is only continuing to advance and mature, as machines acquire enhanced understandings of human-generated content. The net neutrality debate thus serves as a prominent warning that, soon enough, the distinction between human- and computer-generated language may be nearly impossible to draw.

Faking public comments to influence policy is a serious problem for any democracy.