The Coming Algocracy

Unprecedented, Unholy, Unseen: AI Chatbots Are Colonizing Our Minds

AI bots are ubiquitous, yet potentially mind-altering in major ways. From digital assistants like Siri and Alexa to social media to support lines for your appliances, you are interacting with programs every single day. Do they have the collective influence to change your thinking? Or worse, the way you think? This article should be read start to finish. Then read it two more times – Joe Allen is NOT a bot. ⁃ TN Editor

Chatbots are at the front lines of an unrelenting AI invasion. The steady increase of artificial minds in our collective psyche is akin to mass immigration—barely noticed and easily overlooked, until it’s too late. Our cultural landscape is being colonized by bots, and as with illegal aliens, much of our population welcomes this as “progress.”

The bots will keep us company. They will learn and absorb our personalities. And when we die, they will become our digital ghosts. It’s a morbid prospect, but the process is already underway.

E-learning institutions regularly deploy AI teachers. Chatbot companions are seducing lonesome souls by the millions, including religious chatbots that function as spiritual guides. At the end of the road, various start-ups are developing cyber-shrines where families can commune with their departed loved ones and find comfort in the digital undead.

In the minds of tech enthusiasts, AI chatbots of all sorts will be our soulless companions on the trek toward the Future™. These ephemeral “friends” are key psychological components of what many describe as human-AI symbiosis. They will be like artificial guardian angels in our palms—and by extension, in our heads—answering questions and steering decisions.

One thing is certain. Whatever you think about this invasion, AIs are falling to earth like stars from a godless heaven. And with each successive wave, their voices are that much more convincing.

These bots are crafted to push our cognitive buttons, giving the illusion of personhood. Before long, they will come to be widely trusted—even loved. Among early adopters, they already are. Our emotional minds are being hardwired for control.

The recent roll-out of ChatGPT, created by OpenAI, has been heralded as the second coming of the Google God. As with previous GPT programs, the user types in a question and the bot onscreen spits out a reasonably coherent, if occasionally inaccurate, answer.

A few days ago, I asked ChatGPT about one of OpenAI’s founding investors: “Will Elon Musk chip our brains?”

“No,” the bot responded, “Elon Musk does not believe in chipping brains. He has said that he believes that ‘abundance is our future’ and that technology should be used to empower people, not replace them.”

Like the slanted Google God before it, ChatGPT may not be entirely truthful, but at least it’s loyal to political allies. In that sense, it’s quite human.

If you can’t trust a chatbot, who can you trust?

Speaking at “The History of Civil Liberties in Canada Series” on December 13, the weepy maker-of-men, Dr. Jordan Peterson, warned his fellow Canucks about ChatGPT’s godlike powers:

So now we have an AI model that can extract a model of the world from the entire corpus of language. Alright. And it’s smarter than you. It’s gonna be a hell of a lot smarter than you in two years. …

Giants are going to walk the earth once more. And we’re gonna live through that. Maybe.

You hear that, human? Prepare to kneel before your digital overlords. For all the public crying Peterson has done, he didn’t shed a single tear about humanity’s displacement by AI. Maybe he believes the Machine will devour all his trolls first.

Peterson did go on to ride Elon Musk’s jock, though, portraying the cyborg car dealer as some sort of savior—which, to my disgust, is the embarrassing habit of almost every “intellectual dark web” icon these days. What’s odd is that the comparative mythology professor failed to note the archetypal significance of the Baphomet armor Musk still sports in his Twitter profile.

Anyone urging people to trust the world’s wealthiest transhumanist is either fooling themselves, or they’re trying to fool you.

This is not to say Musk and Peterson are entirely wrong about the increasing power of artificial intelligence, even if they’re far too eager to see us bend the knee. In the unlikely event that progress stalls for decades, leaving us with the tech we have right now, the social and psychological impact of the ongoing AI invasion is still a grave concern.

At the moment, the intellectual prowess of machine intelligence is way over-hyped. If humanity is lucky, that will continue to be the case. But the real advances are impressive nonetheless. AI agents are not “just computer programs.” They’re narrow thinking machines that can scour vast amounts of data, of their own accord, and they do find genuinely meaningful patterns.

A large language model (aka a chatbot) is like a human brain grown in a jar, with a limited selection of sensors plugged into it. First, the programmers decide what parameters the AI will begin with—the sorts of patterns it will search for as it grows. Then, the model is trained on a selection of data, also chosen by the programmers. The heavier the programmers’ hand, the more bias the system will exhibit.

In the case of ChatGPT, the datasets consist of a massive selection of digitized books, all of Wikipedia, and most of the Internet, plus the secondary training of repeated conversations with users. The AI is motivated to learn by Pavlovian “reward models,” like a neural blob receiving hits of dopamine every time it gets the right answer. As with most commercial chatbots, the programmers put up guardrails to keep the AI from saying anything racist, sexist, or homophobic.
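To make the “reward model” idea concrete, here is a minimal toy sketch in Python. This is not OpenAI’s training code; the reward_model function, its scoring rules, and the update loop are invented stand-ins for the learned network that scores replies against human preferences:

```python
import random

# Toy stand-in for a learned reward model. In real RLHF training,
# a separate neural network scores replies using human preference data.
def reward_model(reply: str) -> float:
    score = 0.0
    if "sorry" in reply.lower():   # crude politeness proxy
        score += 1.0
    if len(reply.split()) > 3:     # crude completeness proxy
        score += 0.5
    return score

# Two canned candidate replies the "policy" can choose between.
candidates = ["No.", "Sorry, I can't verify that claim."]
weights = [1.0, 1.0]               # the policy's current preferences

for _ in range(100):
    # Sample a reply in proportion to the current weights.
    reply = random.choices(candidates, weights=weights)[0]
    # Reinforce the sampled reply in proportion to its reward.
    weights[candidates.index(reply)] += 0.1 * reward_model(reply)

print(dict(zip(candidates, weights)))  # the higher-reward reply wins out
```

Run long enough, the weight on the politer, fuller reply grows while the terse one withers; that is the Pavlovian dynamic described above, in miniature.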

When “AI ethicists” talk about “aligning AI with human values,” they mostly mean creating bots that are politically correct. On the one hand, that’s pretty smart, because if we’re moving toward global algocracy—where the multiculti masses are ruled by algorithms—then liberals are wise to make AI as inoffensive as possible. They certainly don’t want another Creature From the 4chan Lagoon, like when Microsoft’s Tay went schizo-nazi, or the Google Photos bot kept labeling black people as “gorillas.”

On the other hand, if an AI can’t grasp the basic differences between men and women or understand the significance of continental population clusters—well, I’m sure it’ll still be a useful enforcer in our Rainbow Algocracy.

Over the course of a conversation with ChatGPT, the bot develops its own flavor. The more interactions an individual user has, the more the bot personalizes its answers for that user. It can produce sentences or whole essays that are somewhat original, even if they’re just a remix of previous human thought. This semi-originality, along with the learned personalization, is what gives the illusion of a unique personality—minus any locker room humor.

Across the board, the answers these AIs provide are getting more accurate and increasingly complex. Another example is Google’s LaMDA, still unreleased, which rocketed to fame last year when an “AI ethicist” informed the public that the bot is “sentient,” claiming it expresses sadness and yearning. Ray Kurzweil predicted this psychological development back in 1999, in his book The Age of Spiritual Machines:

They will increasingly appear to have their own personalities, evidencing reactions that we can only label as emotions and articulating their own goals and purposes. They will appear to have their own free will. They will claim to have spiritual experiences. And people…will believe them.

This says as much about the humans involved as it does about the machines. However, projecting this improvement into the future—at an exponential rate—Kurzweil foresees a coming Singularity in which even the most intelligent humans are truly overtaken by artificial intelligence.

That would be the point of no return. Our destiny would be out of our hands.

My first and only image request to OpenAI’s art generator

In 2021, the tech entrepreneur Sam Altman—who co-founded OpenAI with Musk in 2015—hinted at something like a Singularity in his essay “Moore’s Law for Everything.” Similar to Kurzweil, he promises artificial intelligence will transform every aspect of society, from law and medicine to work and socialization.

Assuming that automation will yield radical abundance—even as it produces widespread unemployment—he argues for taxation of the super rich and an “equity fund” for the rest of us. While I believe such a future would be disastrous, creating vast playgrounds for the elite and algorithmic pod-hives for the rest of us, I think Altman is correct about the coming impact:

In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”

This technological revolution is unstoppable.

These superbots would undoubtedly be wonky and inhuman, but at the current pace of improvement, something like Altman’s prediction appears to be happening. Beyond the technical possibilities and limitations, a growing belief in AI personhood is reshaping our culture from the top down—and at an exponential rate.

Our shared vision of who we are, as a species, is being transformed.

“Johnny 5 is alive! More input, MORE INPUT!!”

Bots are invading our minds through our phones, our smart speakers, our educational institutions, our businesses, our government agencies, our intelligence agencies, our religious institutions, and through a growing variety of physical robots meant to accompany us from cradle to grave.

We are being primed for algocracy.

Past generations ignored mass immigration and environmental destruction, both fueled by tech innovations, until it was too late to turn back the tide. Right now, we have a “narrow window of opportunity” to erect cultural and legal barriers—family by family, community by community, and nation by nation.

If this social experiment is “inevitable,” we must insist on being part of the control group.

Ridiculous as it may seem, techno-skeptics are already being labeled as “speciesist”—i.e., racist against robots. We’d better be prepared to wear that as a badge of honor. As our tech oligarchs and their mouthpieces proclaim the rise of digital deities, it should be clear that we’re not the supremacists in this equation.


from:    https://www.technocracy.news/unprecedented-unholy-unseen-ai-chatbots-are-colonizing-our-minds/

 

Where’s Your Crypto?

CNBC Investigations

Fraudsters are using bots to drain cryptocurrency accounts

Key Points
  • Fraudsters are selling bots on Telegram that are designed to trick investors into divulging their two-factor authentication, leading to accounts being wiped out.
  • Crypto investors are being targeted around the country.
  • Dr. Anders Apgar, a Coinbase customer, said his account had a balance of more than $100,000 in crypto when it was hacked during a robocall.

Dr. Anders Apgar was out for dinner last month with his family when his phone would not stop buzzing. It looked like a robocall, so he tried to ignore it.

But the calls would not stop. Then his wife’s phone also started to ring.

“When she picks it up, a banner came across, a notification that says, ‘Your account’s in jeopardy,’” he said.

The warning, which he said was a text message, prompted him to pick up his phone. That was when the couple’s nightmare started.

It’s the kind of nightmare many crypto account holders around the country are facing as hackers target a boom in the industry, cybersecurity experts said.

The Apgars, who are both Maryland-based obstetricians, began investing in cryptocurrency several years ago. By December, their account had grown to about $106,000, mainly held in bitcoin. Like millions of investors across the country, their account is with Coinbase, the country’s largest cryptocurrency platform.

When Apgar picked up the phone, a female voice said, “Hello, welcome to Coinbase security prevention line. We have detected unauthorized activity due to failed log-in attempt on your account. This was requested from a Canada IP address. If this (is) not you, please press 1, to complete precautions recovering your account.” The call lasted just 19 seconds.

Alarmed, Apgar pressed 1.

He said he cannot remember if he manually entered his two-factor authentication code or if it came up automatically on his screen. But what happened in that moment led to his account being locked in less than two minutes. As Apgar has not regained access, he said he assumes the fraudsters stole most, if not all, of the crypto, but he can’t be sure.

“It was just dread and an emptiness of just, ‘Oh my gosh, I can’t get this back,’” he said.

The Apgars were targeted by a particularly insidious type of fraud that takes advantage of two-factor authentication, or 2FA. People use 2FA, a second level of security that often involves a passcode, to safeguard a range of accounts at crypto exchanges, banks or anywhere else they carry out digital transactions.
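For context on what is being stolen in that moment: one common form of 2FA code is the time-based one-time password (TOTP) standardized in RFC 6238. The sketch below, in plain Python with an illustrative shared secret, shows how an authenticator app derives the six-digit code; it is a simplified illustration, not Coinbase’s implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                   # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Illustrative shared secret; in practice it lives in your authenticator app
# and on the service's server, and is never meant to be spoken aloud.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is computed from a secret shared only between the user and the service, reciting it to a caller is the same as handing over the second factor itself, which is exactly what the OTP bots are built to harvest.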

Dr. Anders Apgar (CNBC)

But this new type of fraud goes right at that 2FA code, and it uses people’s fear of their accounts being hacked against them. In taking action they think will protect them, they actually expose themselves to thieves.

The fraud tool is called a one-time password, or OTP, bot.

A report produced by Florida-based cybersecurity firm and CNBC contributor Q6 Cyber said the OTP bots are driving substantial losses for financial and other institutions. The damage is hard to quantify now because the bot attacks are relatively new.

“The bot calls are crafted in a very skillful manner, creating a sense of urgency and trust over the phone. The calls rely on fear, convincing the victims to act to ‘avoid’ fraud in their account,” the report said.

The scam works in part because victims are used to providing a code for authentication to verify account information. At first listen, the robocalls can sound legitimate — especially if the victim is harried or distracted by other things at the moment the call comes in.

“It’s human nature,” said Jessica Kelley, a Q6 Cyber analyst who authored the report. “If you receive a call that tells you someone’s trying to sign in to your account, you’re not thinking, ‘Well, I wasn’t trying to.’”

The bots began showing up for sale on messaging platform Telegram last summer. Kelley identified at least six Telegram channels with more than 10,000 subscribers each selling the bots.

While there is no official estimate on the amount of crypto stolen, Kelley said fraudsters routinely brag on Telegram about how well the bots have worked, netting each user thousands or even hundreds of thousands of dollars in crypto. The cost of the bots ranges from $100 a month to $4,000 for a lifetime subscription.

“Before these OTP bots, a cybercriminal would have to make that call himself,” Kelley said. “They would have to call the victim and try to get them to divulge their personal identifiable information or bank account PIN or their 2FA passcode. And now, with these bots, that whole system is just automated and the scalability is that much larger.”

“Once the victim inputs that 2FA code, or any other information that they requested the victim put in their phone, that information gets sent to the bot,” Kelley said. The bot “then automatically sends it to the cybercriminal, who then has access to the victim’s account.”

She said criminals could “potentially steal everything, because with these transactions, they can do them one after the other until the amount is basically drained.”

In a statement to CNBC, a Coinbase spokesperson said, “Coinbase will never make unsolicited calls to its customers, and we encourage everyone to be cautious when providing information over the phone. If you receive a call from someone claiming to be from a financial institution (whether Coinbase or your bank), do not disclose any of your account details or security codes. Instead, hang up and call them back at an official phone number listed on the organization’s website.”

David Silver, another Coinbase customer, knew the company would not be calling him. He recently received a robocall saying there was a problem with his account.

“And immediately, it was an electronic voice that told me it was Coinbase Fraud Department,” he said. “And I immediately turned to the lawyer sitting next to me and said, ‘Start videoing.’ I knew instantaneously what this was and what it was going to be.”

Attorney David Silver (CNBC)

Silver knew what the call was about because he is not just a Coinbase client — he is an attorney who specializes in cryptocurrency and financial fraud cases.

Silver pressed 1 and found himself on a live call. A person got on the line pretending to be a Coinbase employee.

“And they immediately started telling me things that I know are in violation of what Coinbase would do,” he said. “For instance, they will never ask for your password. They will never try and take over your computer.”

Silver asked if he could be sent an email verifying that the call was from Coinbase. The answer was no.

“And their answer was no because there’s only certain ways that you can mask the email coming directly from a domain that nowadays, the domain carriers such as GoDaddy, Google — it’s very hard to spoof email coming from the domains,” he said. “And they weren’t willing to send me the email. I would say that was my last shred of hope that they were legitimate is when I asked them to send me the email and they said no.”

After nearly seven minutes, Silver was asked to share his computer screen. He ended the call.

“I’m not surprised I got the call. But I do question how they had my personal cell phone number and where they’re getting that information to tie me to Coinbase,” he said.

Apgar said he wishes he had never answered the phone. To make matters worse, he has been unable to get his account access restored, he said. When CNBC reached out to Coinbase about the Apgars regaining access to their account, a company spokesperson said the matter was turned over to its security team.

Apgar said Monday that he had just responded to an email from Coinbase to help restore access to the account.

Customer service at Coinbase has been a widespread problem, CNBC found last year. Customers around the country said hackers were draining their accounts but when they turned to Coinbase for help they could not get a response. After the story, Coinbase set up a phone support line to help customers, but even that has been fraught with problems.

Asked what he could have done differently, Apgar said it’s simple: not answer the phone.

from:    https://www.cnbc.com/2022/02/15/crypto-fraudsters-use-robocalls-to-drain-accounts.html