Developing human intelligence in the time of artificial intelligence

Lotta questions, but no answers

through a glass, darkly

--

(written by a human)

This last week I finally took the (non-cursory) plunge into understanding the latest developments in AI — machine learning, large language models, and of course, ChatGPT.

In the media I’ve consumed so far, some compare ChatGPT to the (graphing) calculator, the computer, the internet, search, the iPhone, etc. in the seismic shifts it will bring to how we live and work, while others downplay it as a fad that ultimately won’t live up to its hype or find its utility. On one end of the spectrum, we seem to be realizing science fiction — Ex Machina, Westworld, M3GAN — moment by moment. On the other end, is this just the next gold rush? Because ChatGPT is a linguistic technology (and thus inherently social), I’m far more interested in and far more likely to believe in its societal and political impact (Language: The Original Interface) than, say, the last big thing, i.e. cryptocurrency. The hot takes that would bait me include …

  • How might AI solve or exacerbate climate change?
  • Anything involving Universal Basic Income (UBI)
  • But above all, how fucked are my kids?

I hate playing into stereotypes, but since I’m usually trying to multitask as both parent and tech worker, e.g. listening to a tech podcast while cleaning up the playroom, my thoughts have naturally wandered to how these developments will affect my offspring, their less well-off peers, and future generations. While I am not free from existential angst and concern for my own career, I feel more certain about how to improve my prospects in the now than about how to prepare them for the economy of the future. Maybe I have a false sense (one might say hallucination) that the changes coming won’t upend civilization in my lifetime. Did the cave person understand the discovery of fire? Okay okay, no more hyperbole. (Will we understand hyperbole in 10 years?)

“What can or should I do about AI?” is a question every parent should be asking themselves. And yet, I’d be surprised if any #parenting influencers or journalists have covered it. The one caveat: I have come across a fair amount of media about how it affects schools today. My main critique is that it’s often focused on ChatGPT as a tool to be fought over or shared between students and teachers, instead of grappling with ChatGPT as the precursor to a radical revolution in how humans think and exist in reality. (Hyperbole or die)

These are my questions as a language enthusiast, a software designer, and a parent:

  • If large language models are to words what a calculator is to numbers, how should people learn to communicate? How should you learn to communicate well? Will there be more emphasis on linguistic concepts and principles but not actual writing, as there is for understanding math “concepts” but not performing manual calculations?
  • Will there be any value in being a good writer? Or will there be less emphasis on developing written/“asynchronous” communication skills, which can be automated and are thus easier to produce, and more emphasis on developing oral/“synchronous”/face-to-face communication skills, which cannot be automated and are thus harder and more valuable?
  • How biased are large language models toward business writing? Will they further “flatten” and shorten our common language down to machinespeak? Can you be a good writer outside the guidelines enforced by the models? What incentives do you have to learn more complex syntax or more specific vocabulary?
  • What will be the highest-paying jobs, and what will offer the most jobs, if any — developing, training, or applying large language models? Maybe the answer is obvious, but maybe it isn’t.
  • The other day, my mother told my 2-year-old to start learning math (so far they’re counting to 5), otherwise they won’t be able to keep up with AI. This kind of nonsense raises the question: in a world where the machine will always outpace the human brain, what is the right pace and focus for the human?
  • How biased are large language models toward English, English code (is there any other kind), and Western worldviews? I assume there are Chinese companies working furiously away at their own versions. I’ve always wondered how different the world would be if the internet had been invented in China. Example: to my knowledge, Chinese words don’t quite have synonyms. Would that make it easier or harder to name variables and keys?
  • If art can be generated in infinite variations at the press of a key, does that really replace the need for creative people and creative skills? Is this the death of creativity, the death of art and poetry? Or will creativity come to be less about generation and more about critique — choosing the best out of infinite possibilities? Can statistical probability replace human imagination and evoke human emotion? Will humans be driven to imagine and enact less and less statistically likely variations? Will it be more or less important to teach kids to “think different”?
  • How old was the textual material they fed into ChatGPT? How old were the books? Would studying materials outside its data set be an advantage or a waste of time? What can human history teach the model and what can it teach our kids?
  • If you have access to a learning partner that never gets tired or forgets stuff, why would you need to learn how to memorize? How will you be motivated to train your memory?
  • How will children learn to tell the difference between true and false information? How will the information they get from AI shape their thoughts?

I welcome any resources on these questions!

(written by ChatGPT)

Write a creative personal blog post titled “developing human intelligence in the time of artificial intelligence: lotta questions, no answers”

As an AI language model, I cannot have personal experiences or opinions, but I can write a creative blog post on the given topic.

We live in a world where artificial intelligence has become an integral part of our daily lives. From smartphones to self-driving cars, AI is changing the way we interact with the world. But as AI continues to advance, it raises a lot of questions about human intelligence and its development.

One of the most pressing questions is whether AI is making us less intelligent. With so much information available at our fingertips, are we becoming too reliant on technology and losing the ability to think critically and creatively? Are we sacrificing our creativity and imagination in exchange for convenience and efficiency?

Another question is whether AI can replicate human intelligence. Can machines ever truly understand the complexity of human emotions and the nuances of language? Can they ever replicate the creativity, intuition, and empathy that are so essential to human intelligence?

Despite the advancements in AI, there are still many aspects of human intelligence that remain a mystery. For example, we still don’t fully understand how the brain works or how we form memories. While AI may be able to simulate some aspects of human intelligence, it is still far from being able to replicate it in its entirety.

So, what can we do to develop human intelligence in the time of artificial intelligence? One approach is to focus on developing skills that machines can’t replicate. These include creativity, empathy, intuition, and critical thinking. By nurturing these skills, we can ensure that we remain relevant and valuable in a world where AI is becoming increasingly prevalent.

Another approach is to embrace AI as a tool for enhancing human intelligence. For example, AI can be used to analyze large datasets and identify patterns and trends that humans may not be able to identify on their own. This can help us to make more informed decisions and develop new insights into complex problems.

In conclusion, the relationship between AI and human intelligence is complex and multifaceted. While AI has the potential to enhance human intelligence, it also raises many questions about the nature of human intelligence and its development. As we continue to navigate this evolving landscape, it is important to remain open-minded and adaptable, embracing the opportunities that AI presents while also acknowledging its limitations.

--
