Mistaking Fluent Speech for Fluent Thought


Written by Kyle Mahowald, The University of Texas at Austin College of Liberal Arts, and Anna A. Ivanova, Massachusetts Institute of Technology (MIT)

When you read a sentence like this one, your past experience tells you that it was written by a thinking, feeling human. And in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that look remarkably human-like are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural, but potentially misleading, to believe that if an AI model can express itself fluently, it thinks and feels just as humans do.

Thus, it is perhaps not surprising that a former Google engineer recently claimed that Google’s LaMDA AI system has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking, feeling and experiencing.

The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is so easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious or intelligent.

Using artificial intelligence to generate human-like language

Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is the result of a decades-long program of building models that generate grammatical, meaningful language.




Screenshot showing a text dialogue.
The first computer system to engage people in dialogue was a psychotherapy program called ELIZA, built more than half a century ago.
Rosenfeld Media/Flickr, CC BY

Early versions of language models, dating back to at least the 1950s and known as n-gram models, simply counted up occurrences of specific phrases and used them to guess which words were likely to occur in particular contexts. For example, it is easy to tell that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapple.” If you have enough English text, you will see the phrase “peanut butter and jelly” again and again but might never see the phrase “peanut butter and pineapple.”
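To make that counting idea concrete, here is a minimal sketch in Python of how an n-gram model tallies which word tends to follow a given context. The tiny corpus and the three-word context size are illustrative assumptions, not taken from any real system:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a large collection of English text.
corpus = (
    "peanut butter and jelly is a classic sandwich . "
    "peanut butter and jelly never goes out of style . "
    "peanut butter and pineapple is unusual ."
).split()

# Count how often each word follows a given three-word context.
counts = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    next_word = corpus[i + 3]
    counts[context][next_word] += 1

# Given the context "peanut butter and", which word is most likely next?
print(counts[("peanut", "butter", "and")].most_common())
# e.g. [('jelly', 2), ('pineapple', 1)]
```

With more text, the counts for “jelly” would dwarf those for “pineapple,” which is exactly the frequency-based guesswork the paragraph above describes.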

Today’s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just neighboring words. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard even for the engineers who design them to understand why they generate one sequence of words rather than another.

The models’ task, however, remains the same as in the 1950s: determine which word is likely to come next. Today, they are so good at this task that almost all of the sentences they generate seem fluid and grammatical.

Peanut butter and pineapple?

We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapple___”. It said: “Peanut butter and pineapple are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.

But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. Then another one. Then another one. The model never saw, touched or tasted pineapples – it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.
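GPT-3 itself is reached through a commercial service, but the same one-word-at-a-time mechanism can be illustrated with a freely available model. The sketch below uses the open GPT-2 model via the Hugging Face transformers library; the prompt and generation settings are our own illustrative choices, not the ones behind the example above:

```python
from transformers import pipeline

# Load a small, openly available language model (GPT-2) as a stand-in for
# larger systems such as GPT-3; both work by predicting the next word (token).
generator = pipeline("text-generation", model="gpt2")

prompt = "Peanut butter and pineapple"

# The model repeatedly appends whichever token it judges likely to come next,
# one token at a time, until it reaches the length limit.
result = generator(prompt, max_new_tokens=30, do_sample=True)

print(result[0]["generated_text"])
```

The output reads fluently because each word is a plausible continuation of the words before it, not because the model has any experience of the food it is describing.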

Large AI language models can engage in fluent conversation. However, they have no overall message to communicate, so their phrases often follow common literary tropes drawn from the texts they were trained on. For instance, if prompted with the topic “the nature of love,” a model might generate sentences about believing that love conquers all. The human brain primes the reader to interpret these words as the model’s opinion on the topic, but they are simply a plausible sequence of words.

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.

The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a great deal of time and effort in everyday life, greatly facilitating your social interactions.

In the case of AI systems, however, it misfires – building a mental model out of thin air.

A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because ___.” GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps offset the texture of the feathers.”

The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.

Ascribing intelligence to machines, denying it to humans

The sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics – the study of language in its social and cultural context – shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.

For instance, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the United States, against deaf people using sign languages and against people with speech impediments such as stuttering.

These prejudices are extremely harmful, often leading to racist and sexist assumptions, and have been shown time and time again to be unfounded.

Fluent language alone does not mean humanity

Will AI ever become sentient? This question requires deep consideration, and philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.

About the authors:

Kyle Mahowald, Assistant Professor of Linguistics, The University of Texas at Austin College of Liberal Arts, and Anna A. Ivanova, PhD Candidate in Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.


