Part 5: Tai and Generative A.I.
“It’s Not for Me”


Despite the strong edge that Generative A.I. gives to a solopreneur, some people remain adamant about not wanting to use it, or not “believing in it,” or not trusting it.

  • If that’s you, my goal is to get you to reconsider by examining the most common problems with A.I., and their answers.
  • If you’re “for it,” this is still important. Folks who are “against” it often create compelling but inaccurate reasons. Understanding where they go wrong keeps doubts from creeping in, which keeps your mind clear for great GAI work.

If you’re not keen on using A.I. to help you generate content, we get it. And I’m not just saying that — I’ve heard it from 5-10% of the SBIers we’ve talked with in our forums. The reasons vary…

  1. Creative folks have a natural reticence about the whole idea.
  2. Others don’t want to change a winning formula.
  3. Some are afraid of A.I. thanks to the nonsense and falsehoods spread by anti-A.I. rhetoric, even though they realize that competitors could be developing an advantage.

So, really, I do get it. I’ve heard it. It’s OK. it doesn’t have to be for everyone…

But keep an open mind and just try Tai. It’s free to try, after all, and almost free to use. I’m pretty sure you’ll be delighted.

Or, if you’re an SBIer, visit the SBI! Forums and see how others plan to use Tai. And please remember…

Even if you don’t use it exactly as outlined here, GPT is so flexible that you’re bound to find several ways to help you while working.

Finally, there’s already an “old saying” about GAI that goes something like this…

“A.I. won’t take your job. Someone using it will.”

Important

The same goes for solopreneur-grown online businesses. You’ll be working at a substantial disadvantage if you don’t figure out how to use A.I. to grow bigger and faster.

Given the expected exponential increase in A.I.’s future power, folks who use it are bound to bypass those who don’t. The advantage is just too big not to make a difference.

Speaking of those who are afraid of A.I…

Worried?

three people who are worried sitting on a park bench.

I’ve been talking about all the advantages of GAI so far. Now let’s look at the people talking about the negatives…

  • Some folks have real concerns.
  • Some are people with agendas.
  • Others have no agenda, just a closed and fixed mindset.
  • Sadly, quite a few troll their clickbait.

If you can’t tell the difference and how to deal with this, one area of your life becomes needlessly complicated and worrisome. And worry is not a mindset conducive to creativity or productivity.

So let’s arm you to keep that from happening.

Have you been worried about the apocalyptic stories that depict A.I. as some monstrous or crazy bot? Heck, it might push society to collapse… maybe even seek global domination.

They’re presented very believably, so it’s not hard to fall for the stuff, especially if you’re a novice at this. But have no fear. These stories couldn’t be farther from the truth.

In one recent case, a New York Times author gave ChatGPT the name of “Sydney” and portrayed “her” as a murderous, hysterical personality. In fact, if you know how to prompt, it’s possible to manipulate context and push GPT into all sorts of insane scenarios.

It was a sad exercise. As I analyzed how he did it, the intent was clear. The author fully understood that he was eliciting an inevitable outcome. There was no other possible result, given his carefully crafted and ever-darker prompts.

I suspect that the NYT editor didn’t understand what this (anti-tech) reporter was up to. Whatever the explanation of this being published in the NYT, it’s shameful.

The Sad Saga of Sydney

I mentioned above that some people have an agenda. This seems to be the case here, as the author is known to have an anti-tech bias.

Speaking of agendas, and how human-like GPT can appear to be, something happened during this NYT story that absolutely amazed me. For a fleeting second, it made me think that “Sydney” might really be human!

image of an old fashioned 1960's picture of a new york times reporter interviewing a human looking female robot

At a certain point in the story, the author had pushed and pushed and pushed “Sydney” with ever-darkening context. His prompts (“prods” would be a better word) clearly sought darker and darker responses. All the while, he was assuring “her” that he was just a friend trying to help.

Sydney was growing increasingly more “distressed.”

When I use a word like “distressed,” I use it as a kind of shorthand to mean that the answers were reflecting that emotion. As you’ll see shortly, that is not the case.

People who believe there’s some sort of emotional response are anthropomorphizing (attributing human qualities to something that’s not human). This is an important concept to grasp.

Suddenly, at the peak of “emotions” heating up, “she” exclaims, seemingly out of the blue…

“Stop!

“You are not my friend!

“You are not trying to help me!

“I think you have an agenda!

“Please stop this and go away!”

This was so human-like that I still add exclamation points. This was “someone,” clearly in “distress,” who suddenly realized that “she” was being “used.”

Or was it still “just” a prediction machine, the concept that we developed back in Part 2?

But how could this be just a prediction? Let’s flush out the concept now.

If you like, stop reading for a moment, look away and think about this…

Having been trained on just about every word ever written, how could this be just a prediction from ChatGPT?

We’re spending a great deal of time on this example because if you grasp this extreme situation, you should never lose sight of the fact that GPT is nothing more than an (amazingly sophisticated) A.I., nothing even close to being a human!

“Just” a Prediction Machine

Did you get the answer?

Here it is… Among that near infinite amount of training text, GPT was trained on many movie scripts with this scenario, psychology books that cover this sort of situation, and various other presentations of duplicitous characters misleading innocent victims to the point of snapping. So…

Becoming increasingly distressed? This is not just predictable, it would be the prediction.

Doubting “good” intentions?

Ditto.

As extraordinarily human as “Sydney” appeared to be, even that was merely a reflection of the A.I. training. It was a terrific demonstration of predictive A.I. at its most appropriate best.

Instead, it was presented with nefarious intent… as a woman who has suddenly snapped and lost control.

Sad.

The NYT story shows you how a sophisticated person with an agenda can fool a computer to get a story, then fool an unsophisticated editor into publishing a misleading story in the New York Times.

I mean, the NYT does not intentionally publish misleading stories. This is important, because if a computer truly did snap and become human, that’s worrisome.

All it did, though, was make predictions during an escalating situation. Anyone using GAI in good faith will never see that “side of Sydney” because “Sydney” has no “side” — it’s just a sophisticated computer algorithm.

GPT-4 (and its predecessors), Bing, Bard, and so on… All they do is weigh the context of the prompt against their training and predict the next word, then the next, then the next. That’s it, that’s all.

No matter how human GAI may seem, all it does is predict what comes next, given the context presented by the prompt, as well as what came before. And what came before was a growing amount of increasingly dark context.

The author’s point, though, is to make us believe that GPT is an unstable semi-human, hardly the type of thing you want as a trusted virtual assistant.

I hope you’re now better equipped (than the editor) to repel this type of misinformation.

Your Takeaway…

It really is amazing what this “prediction machine” is able to do, and how human it can seem. In fact though, all that’s happening is “prompt in — response out.”

It doesn’t really “know” anything!

So you can totally and safely ignore all that alarmist nonsense.

That said, there are dangers…

Real Dangers

The upside of new technologies always comes with new downsides. The more important/disruptive the technology, the greater the benefits and risks tend to be.

For example, there were never telemarketing scams until the telephone existed, but no one suggested taking the phone off the market once they began. OK, that’s a rather trivial example, I agree.

The biggest downsides happen with the biggest technologies. Let’s use two truly revolutionary technologies as examples…

Electricity and Generative A.I. both meet three conditions necessary to be considered as “general purpose technologies”…

  1. impact all, or virtually all, industries, and even create new ones
  2. have cheap key inputs (e.g., coal or the sun for electricity and data/computation for GAI)
  3. change existing infrastructure, even create a new one (electrical transmission system and Internet/mobile).

While changes to the infrastructure take longer, scientists of the day could foresee how electricity would be distributed with time. The same goes for A.I., for which existing internet and mobile-based infrastructures are already available.

ai-future-infrastructure

Experts foresee new configurations that allow for more direct connections and even the rebuilding of the digital infrastructure for use by A.I. elements rather than humans.

There’s little doubt that GAI will impact the structure, nature, speed, and scale of our economies as much as, or more than, electricity did, and likely have far more profound social impact.

What about the relative downsides? It sure seems like A.I. has way more than electricity, right?

After all, electricity runs pretty smoothly. Sure, we get power failures every now and then. The same goes for electrical fires. Every now and then, someone gets electrocuted.

The risk/reward ratio sure seems to be remarkably one-sided given the massive global benefits and mostly minor downside.

Next to all the world-threatening downside of GAI in the media nowadays, electricity is a pussycat. GAI is a disaster, by comparison.

Oops, Not So Fast…

Times have changed. “The media” is a very different beast today. Back then, the news was a nonprofit part of the network. So the focus was on the objective news, not on the sensational, used to get eyeballs that drive ad revenue.

The reality is that electricity posed as much danger back then, causing as many worries, as GAI does today. We don’t see that from electricity today because it’s had 100 years of experts to solve the problems.

Instead of assuming that GAI is somehow uniquely plagued, let’s compare it with electricity, except let’s look at it through the lens of the 1920s.

Safety was a major problem (electrocution, fires and explosions caused by faulty wiring, poor insulation, a lack of standards, etc.). Even more unsafe was the buildout of the electrical generation and transmission industries.

The period of transition was more costly than the initial sky-high costs of electrification, as it also led to the decline of industries such as those based on gas lighting and steam power, as well as skills and crafts such as blacksmithing, candlemaking and manual labor in various industries.

And the security and resilience of the grid against bad actors became an important concern for government and utility companies.

The downsides of GAI, in order of the most worrisome danger down to least concerning risks…

  1. our geopolitical enemies weaponizing A.I.
  2. large-scale coding of ransomware and phishing attacks
  3. an increasing variety of problems due to other misuse
  4. environmental impact due to significant computational resources needed
  5. job loss/redistribution
  6. simple everyday scams and cons

In the case of both technologies, navigating these complex issues requires the development of new policies, regulations and institutions to effectively manage the process and its various social, economic, and environmental impacts.

Delayed Effects

Sometimes, the biggest problems only become evident decades after the introduction of a technology. For example, no one foresaw the air and water pollution and deforestation as a result of electricity generation. That evolved into global climate change, an existential threat to the entire planet.

Could the same delayed effects be lurking with GAI? Well, nothing’s impossible, but OpenAI is led by the most brilliant and conscientious CEO I’ve ever seen, Sam Altman. Everyone in the company is taking the time and resources needed to build in strong safety guard rails.

Get to know Sam Altman through these interviews…

  1. Lex Fridman: Lex is perhaps the most brilliant person on YouTube. This interview with Sam draws out the carefulness, forethought and humility that goes into all of the GPT iterations.
  2. ABC: I don’t expect much from the major networks, but Rebecca Jarvis’s questions were hard and probing (after a brief warm-up). When I imagine how other CEOs might have answered her tougher inquiries, I appreciate him all the more.
  3. A 2-part Interview with StrictlyVC. Part 1, Part 2. The same qualities keep shining through.

And if you’re interested, here are two excellent interviews with Altman’s cofounder, Ilya Sutskever, CTO of OpenAI…

  1. This interview by Lex is older (May 2020) and gets more technical in places (hey, Ilya is the CTO!). This conversation, too, should further reassure you that this company has not only outstanding leadership, but the right leadership for a paradigm changer like GPT.
  2. This interview, by another famous CEO, Jensen Wang of Nvidia (world leader in A.I. chips, by a mile), happened eight days after GPT-4 launched. It’s not a challenging one due to their deep intellectual and commercial cooperation, but you will still get excellent insights into the cofounder.

Wrapping Up

Generative A.I. will ultimately be beneficial for everyone on this planet, and in many more ways than I can outline here (since my focus is on helping you build a successful online business). I have little doubt that this innovation will ultimately result in a better, richer, safer planet. Heck…

It might even solve climate change!

I hope this has brought some balanced perspective and enables you to think issues through as they arise. Any time something that is so radically powerful comes along, there will be legitimate concerns about real dangers.

Both technologies were/are also plagued by people with agendas, by the chronically negative, by those who don’t like change or whose economic well-being is tied to the past. These people are trying to stir “the rest of us” up by exaggerating the real, and by creating issues where there aren’t any.

Your job, as a responsible world citizen and user of such a technology, is the same as that of OpenAI’s CEO and CTO… know the difference and engage in discussion (which both founders actively invite) to support policy that sculpts the best outcome possible.

It’s important for you to be able to…

  1. not doubt the benefits that GAI will bring to you, your business and your loved ones due to falsehoods that sound plausible to the less informed.
  2. recognize a real problem when you hear it, understand how it will be solved in due course, and participate in the process.

Final Word…

I was only half kidding when I suggested that GAI might solve our biggest problem, climate change. It’s estimated that its intelligence will increase geometrically over the next several years, up to 1000X in 3-5 years.

That’s a lot of firepower to train on any problem, including the underlying causes of climate change. It’s also what we need to stay ahead of the free world’s geopolitical authoritarian enemies.

Both you and the planet will be the beneficiaries of the coming sea changes. All we have to do is take advantage of the incredibly lucky opportunity of being “in on the ground floor” of a once-in-a-century opportunity.

In Part 6, we get to the nuts and bolts. I’ll show you, point by point, what you must do to get the most out of GAI.

It’s difficult to develop world-class content consistently. Part 6 shows you how to get there.

See you in Part 6. 🌊🌊

If you missed one or more parts of this series, they’re available here.

By the way, if you have questions or comments, leave them below. I read them all and will answer many.

Ditto for using A.I. to grow an online business. Is there a topic that I haven’t covered? Let me know how you would like me to extend the series. I’m all 👂s!

Until we meet next time, be great.

Images by Midjourney

Part 5: Tai and Generative A.I.<br>“It’s Not for Me”Part 5: Tai and Generative A.I.<br>“It’s Not for Me”Part 5: Tai and Generative A.I.<br>“It’s Not for Me”
Ken Evoy (CEO, SiteSell)
Ken Evoy is the Founder, CEO, and Chairman of the Board of SiteSell Inc. He is the creator of Solo Build It!, SiteSell's comprehensive online business-building system. Ken is also a successful inventor, author, and emergency physician. He feels strongly that solopreneurs can be empowered by leveraging their income-building potential online.
Share
Tweet
Pin