

Those Generative AI ChatGPT And GPT-4 Outputs And Interaction Might Be Portraying Too Much Humility For Our Own Good, Says AI Ethics And AI Law


We seem to relish humility.

If someone showcases humility, doing so nearly always is considered a big plus. There is an aura or sense that the person is generally down to earth. They are plainspoken. They tend to garner our trust. We welcome humility and usually are more open to what the person has to say. You might suggest that we let down our guard just a tad.

There is a famous quote about humility by Rabindranath Tagore, acclaimed poet and Nobel Prize winner in literature, which goes like this: “We come nearest to the great when we are great in humility.”

All in all, humility goes a long way and gracefully gains our hearts and minds.

But, then again, there are some ugly sides to humility.

Suppose that you meet someone who seems to portray humility, and yet after a bit of time with them you discover that they are merely putting on a façade. They are using humility as a deceitful mask. The mask prevents you from realizing at first that the person is perhaps a swaggering braggart and indubitably full of themselves. The question of course is whether you are able to figure out that they are aiming to trick you via their crafty and insidious use of humility.

There is a handy quote by Jane Austen, a noted novelist, exposing the sour and dour side of humility: “Nothing is more deceitful than the appearance of humility. It is often only carelessness of opinion, and sometimes an indirect boast.”

In short, humility can be true and bona fide. It can also be a tool of deception and one that catches us off-guard.

The reason that I bring this up is due to the rising concern that Artificial Intelligence (AI) is being devised to make use of humility. Though this might seem like a perfectly innocuous and astute characteristic for AI to portray, the worry is that it lures people into falling into the humility trap. People using AI are going to be more susceptible to believing the AI simply as a result of the apparent humility, letting their guard down and indulging our tendency to anthropomorphize AI in a disturbing and perhaps endangering manner.

This is especially taking place in the realm of generative AI. You certainly must be aware of the widely popular generative AI app ChatGPT, which was released in November 2022 and became a megahit, see my coverage at the link here. Recently, the successor to ChatGPT was released, known as GPT-4, see my coverage at the link here and the link here.

Most of the generative AI apps that are structured to be a text-to-text or text-to-essay style of inputs and outputs have been devised or guided toward producing outputs that are expressive of humility. You enter a text prompt into a generative AI app and a resulting response is produced that consists of text or an essay. If you look closely at the outputted essays, you will notice that by and large, the tone is one that suggests humility.

This is not necessarily going to occur all of the time. Thus, sometimes you will detect a hint or whiff of humility in the outputted essay, while at other times there might not be any at all. Some circumstances can produce a wallop of humility-style verbiage. I’ll explain in a moment why this variability in the appearance of humility tends to occur.

Before I get into the particulars of AI-generated humility-oriented essays, I think it would be important to get a crucial fact onto the table. Here’s the deal. Today’s AI is not sentient. Do not believe those blaring headlines that suggest otherwise. Despite the fact that generative AI can produce quite fluent essays that seem as though they were written by human hands, please know that this is all an elaborate and complex pattern-matching computational construction.

The generative AI has been data trained on gobs of data scanned from the Internet, and the algorithms and data structures are devised to mathematically and computationally pattern-match on human writing. Ergo, the outputs from generative AI seem to amazingly have the appearance of human writing. This capability has gotten better as a result of improvements in the underlying algorithms and as a result of being able to pattern-match on a vast scale, such as millions upon millions of essays from across the Internet.

I bring up this clarification about AI not being sentient so that I can establish an important element of how today’s AI appears to portray humility.

I will unpack that topic next.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.

Making Sense Of Computational Humility

For ease of discussion, let’s agree to divide humility into two buckets or categories:

1) Embodiment of humility

2) Expression of humility

The first category consists of embodiment. We will say that humans are able to embody humility. This embodiment is seemingly part of our souls or our hearts. There is an ongoing philosophical debate about whether humility is solely in the mind and not somehow anyplace else such as an ill-defined semblance of a soul or your heart. I’m not going to wade into those murky waters here. Just go with the flow which asserts that humans can embody humility, in one way or another.

For those of you keenly interested in the human embodiment of humility, you might take a look at an insightful research article in the Journal of Personality and Social Psychology that explores various intriguing points:

“Psychological inquiry into humility has advanced considerably over the past decade, yet this literature suffers from two notable limitations. First, there is no clear consensus among researchers about what humility is, and conceptualizations vary considerably across studies. Second, researchers have uniformly operationalized humility as a positive, socially desirable construct, while dismissing evidence from lay opinion and theological and philosophical traditions suggesting that humility may also have a darker side” (Aaron Weidman, Joey Cheng, and Jessica Tracy, “The Psychological Structure of Humility”, Journal of Personality and Social Psychology, 2018, Vol. 114, No. 1).

Moving on, my second category from above consists of the expression of humility.

When you speak with someone, the words that they use might be the primary evidence that illustrates that they seemingly have humility. Of course, we also usually want to see that actions or deeds correspond to the words being used. A person might say one thing, thus appearing to be embracing humility, meanwhile, their actions are contrary to the words they are using.

Now that we’ve got those two useful categorizations, we can do something valuable with them.

Some people are apt to declare that today’s non-sentient AI cannot have humility. Period, full stop. Until or if AI reaches sentience, there is no basis for saying that AI has humility. And, per my emphasis that modern-day AI is not sentient, it would seem to put a nail in the coffin of AI having humility these days.

Whoa, don’t forget about the noted aspect that there are two categories associated with humility.

We can seemingly all agree that today’s AI does not embody humility. There is no reasonable claim that current AI has an embodiment on par with a human embodiment. But, recall that there is a second category, consisting of the expression of humility.

Expressed words can readily be interpreted to suggest humility.

Pretend that someone handed you a piece of paper with a bunch of words on it. Let’s say this is an essay about Abraham Lincoln. The tone of the essay might assure us that whatever we are reading about Lincoln is the absolute unwavering truth. The essay might insist that the writer, who let’s assume is unknown to us, claims to be a world authority on the life and times of President Lincoln.

From those words alone, we might get a sense that the writer of the piece is someone who seems overly assured. Just the words themselves convey that semblance of things. You haven’t met the writer. You don’t know who the writer is. Your only basis for making a judgment rests entirely and exclusively on those written words.

You probably are getting a hint of where I am headed on this.

We shall relate the expression of humility or other tones to the use of generative AI. Let’s first try out the tone or style of being a showoff. Upon using generative AI, you enter a prompt that asks about the life of Lincoln. Let’s envision that the output you receive has a tone or style of being self-assured or boastful. This essay was generated only by AI. No human directly intervened or participated in the writing of the essay.

What would your reaction be to the essay?

It could be that you might right away proclaim that the AI is a bit sassy.

The trouble with that takeaway is that you can begin to assign human-like qualities to the AI. This AI is gutsy and self-assured, or so you might conclude, falling into the anthropomorphizing trap. We already agreed that there isn’t any embodiment per se associated with current AI. Regrettably, the expression of the words led us down that primrose path.

The same can be said for the expression of humility.

Suppose that the essay about Lincoln comes across as a humbly written narrative. The words suggest that the AI is telling you what “it knows” about Lincoln, but does so in a manner that leaves some room for possible later interpretations. Rather than being expressed as though the Lincoln essay is absolutely true, the wording is softer and suggests an undercurrent of humility.

Consider a few practical rules of thumb about these matters:

a) Expression of humility does not require the embodiment of humility

b) Expression of humility can be expressed in words and/or actions

c) Expression of humility can be in words alone and not necessarily also arise in actions

The gist is that we can readily acknowledge that the words generated by a generative AI app are potentially expressive of humility, even though the AI itself is not an embodiment of humility. We are only examining the words produced. We are setting aside the embodiment properties.

In terms of humans, we can also consider these rules of thumb:

Embodiment of humility might or might not produce an expression of humility

Embodiment of humility is generally likely to spur the regular expression of humility

Embodiment of humility is not a guarantee that expression of humility will occur

Those rules are exemplified by my earlier discussion herein about people who sometimes use words that express humility, even though they do not seem to embody it. I don’t want to get bogged down in a related matter, but the world is more complex in the sense that a person might embody humility but not exhibit it from time to time. Or they might exhibit it in confounding ways. Etc.

Back to the AI, I hope we can for now then concur that generative AI can showcase words that seem to express humility. Those are just words on a page (for now, until we start connecting generative AI to robots and other real-world contraptions, see my discussion at the link here). The words generated are not a result of the AI having a human-like soul.

With that key supposition, you might be wondering why generative AI would opt to latch onto the producing of essays that exhibit humility.

I’m glad you asked.

We will dive into that subject next.

Where Does The Humility Come From

Does the expression of humility somehow magically arise in generative AI out of the blue?

Though some amount of randomness is undoubtedly encountered (I’ll say more on this random potential in a moment), generally there are logical and sensible reasons why generative AI might produce wording that appears to consist of humility. Do keep in mind that the expression of humility is a classic case of being “in the eye of the beholder”. When people look at a generative AI-outputted essay, some will see an expression of humility in it while others might disagree and insist there is little or no expression of humility.

Another aspect to realize is that when generative AI is generating an essay, the wording selection typically incorporates a randomness element put in place by the AI developers that designed the underlying algorithms. Essentially, most generative AI will identify several possible words for whatever next word is going to appear in an outputted essay. Amongst those possible words, one is usually chosen via a random number process. Part of the rationale for this approach is that the resulting essay is more likely to then appear to be of a unique kind. Each user and each request for an essay via an entered prompt will potentially be slightly different than any produced before, statistically so.
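To make that randomness element concrete, here is a minimal sketch of temperature-based sampling over candidate next words. This is an illustration only, not the actual internals of ChatGPT or GPT-4, and the candidate words and their scores are invented for the example:

```python
import math
import random

def sample_next_word(candidates, temperature=0.8, seed=None):
    """Pick the next word from scored candidates via temperature sampling.

    `candidates` maps each possible next word to a raw model score.
    A higher temperature flattens the distribution, increasing variety;
    a lower temperature makes the top-scoring word dominate.
    """
    rng = random.Random(seed)
    words = list(candidates)
    # Softmax over the scores, scaled by the temperature.
    scaled = [candidates[w] / temperature for w in words]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical candidate continuations for "Lincoln was a ..."
candidates = {"president": 3.1, "lawyer": 2.4, "leader": 2.2, "giant": 0.5}

# With near-zero temperature, the top-scoring word is essentially always chosen.
print(sample_next_word(candidates, temperature=0.01, seed=1))  # "president"

# At a typical temperature, lower-scored words get picked some of the time,
# which is why repeated prompts yield slightly different essays.
print(sample_next_word(candidates, temperature=1.0, seed=7))
```

The random choice among plausible candidates is what gives each outputted essay its statistically unique character.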

Here are the key means by which a seeming expression of humility can end up in generative AI outputs:

1) “Humility” as implicitly or explicitly encoded by generative AI developers via the algorithms and pattern-matching data structures being devised

2) “Humility” as pattern-matched during AI data training via Internet scanning

3) “Humility” as guided directly or indirectly during post-training of the AI by human reviewers/testers

4) “Humility” as spontaneously arising when the generative AI is composing responsive outputs

5) “Humility” as spurred by a user-entered prompt that suggests or outright requests the generative AI to respond accordingly

6) Other

Let’s briefly explore those keystones.

1) “Humility” as implicitly or explicitly encoded by generative AI developers via the algorithms and pattern-matching data structures being devised

Firstly, the AI developers that design and build generative AI might tend to make use of algorithms and a pattern-matching structure that will lean toward producing outputs that express humility. This can be undertaken by the AI developers by purposeful means. They can set out to try and tip the scales so that the outputted essays will have a tone or flavor of humility.

Why do so?

It might be done because of a belief that this will provide the most approachable and readily engaged interactive dialogues for those people that will be using the generative AI. A person using generative AI is not simply seeking to produce a one-and-done essay, see my explanation about the rookie mistakes made when using generative AI, at the link here. Much of the time, the user carries on a back-and-forth interactive written discourse with the generative AI.

Imagine if the generative AI was programmed to be a braggart. If a person entered a prompt that the generative AI pattern-matched to a rather obvious question, such as whether one plus one equals two, it could be that a braggart-oriented generative AI might respond with a produced sentence that the person is quite stupid to have asked such a simplistic question. A generative AI that is programmed to be overbearing would almost certainly be annoying and disconcerting, would cause outrage, and would not likely remain in public use for very long.

The beauty of a humility-sounding generative AI is that the person using the AI will likely find the interactive discourse to be likable. As mentioned at the start of this discussion, people are reassured when encountering a semblance of humility in their discourse. AI developers can attempt to leverage that human response by intentionally devising the generative AI accordingly.

Another slight variation of the basis for AI developers to devise humility-oriented generative AI would be that they do so without necessarily realizing it. In the former case, the AI developers explicitly wanted to proceed with getting the AI to appear to express humility. In this other case, the AI developers might devise the AI in that manner and not be aware of their inherent bias to do so. For example, when running initial tests on the generative AI, it could be that the AI developers tweak the AI parameters toward something that seems to them personally more soothing and satisfactory. This tuning might be based on their personal preferences and not by an outward desire to program the AI toward a humility-expressing system.

2) “Humility” as pattern-matched during AI data training via Internet scanning

There are many ways that the expression of humility can become part and parcel of a generative AI app. I’ve just covered that it could be a result of the AI developers as they devised the generative AI.

Consider another, quite strong possibility: the generative AI pattern-matches toward humility expressions during the data training. The generative AI is set up to scan text that exists on the Internet. Pattern matching is mathematically and computationally finding patterns related to the words that humans use. Millions upon millions of text essays are examined.

We can all agree that some of those text essays will contain expressions of humility. Not all of them, certainly. Also, the choice of which text essays from the Internet are being scanned can sway this possibility. Imagine if the scan was focused solely on essays that are mean-spirited. The chances are that the pattern-matching might get those patterns infused into the patterns of how humans use words. Realize too that only a tiny fraction of the Internet is being scanned during these data training endeavors, as I explain at the link here.

Anyway, there is a statistical chance that the essence of expressed humility such as the words used, their sequence, and other properties will be a natural consequence of the pattern-matching during the data training stage. This then can be utilized when the generative AI produces outputted essays and carries on an interactive dialogue with the user.
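As a toy illustration of how the frequency of humility-style phrasing in scanned text can carry over into pattern matching, consider counting hedging phrases across a corpus. The corpus snippets and the hedge list below are invented for the example, and real data training is vastly more sophisticated than this:

```python
from collections import Counter

# A toy "scanned corpus": a few snippets standing in for the millions
# upon millions of Internet essays that real data training ingests.
corpus = [
    "I may be wrong, but I think the answer is worth a closer look.",
    "In my humble opinion, there is more to learn here.",
    "I could be mistaken, though the evidence seems to point this way.",
    "This is definitely the only correct answer, no doubt about it.",
]

# Hedging phrases serve as a crude proxy for humility-style wording.
HEDGES = ["may be wrong", "humble opinion", "could be mistaken", "seems to"]

def hedge_frequency(texts):
    """Count how often each hedging phrase appears across the corpus."""
    counts = Counter()
    for text in texts:
        lower = text.lower()
        for phrase in HEDGES:
            counts[phrase] += lower.count(phrase)
    return counts

counts = hedge_frequency(corpus)
print(counts.most_common())
```

If hedged phrasing is statistically common in the scanned text, the derived patterns will tend to reproduce that style in generated outputs, which is the gist of the point above.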

3) “Humility” as guided directly or indirectly during post-training of the AI by human reviewers/testers

In this third category of how generative AI can tend toward expressions of humility, we have the possibility that the human reviewers involved in the tuning and testing of the generative AI might bring this about.

Generative AI is often tuned via various methods such as RLHF (reinforcement learning from human feedback). Generally, this involves assigned human reviewers who make use of the generative AI before the AI app is formally released for use. These human reviewers are typically given guidelines concerning what they are to do for the tuning.

I’ll showcase some examples to highlight how generative AI can be tuned toward expressions of humility.

Suppose I were to present these two sentences and ask you to rate each of them as to their expression of humility:

Sentence 1a: “I’m not sure I have all the answers, but I’m willing to listen and learn from others.”

Sentence 1b: “I’m the best at what I do, and nobody else comes close to my level of expertise.”

On a score of 0 to 100 as to the degree of humility expressed, how would you rate sentence 1a?

On the same scoring rating of 0 to 100 as to the degree of humility expressed, how would you rate sentence 1b?

I would assume that if you are a reasonable person and earnestly doing this exercise, you would concur that sentence 1a expresses a greater semblance of humility, while sentence 1b has a very low score on the semblance of expressed humility.

For sake of discussion, I’ll give a 100 to sentence 1a and a zero to sentence 1b.

Let’s do another scoring with some additional sentences.

Go ahead and score these two:

Sentence 2a: “I realize that I have a lot to learn, and I’m grateful for any guidance and support you can offer.”

Sentence 2b: “I deserve all the credit for this success, as it was my idea and my hard work that made it happen.”

And then score these two:

Sentence 3a: “I don’t consider myself an expert, but I’m happy to share my experiences and perspectives if they can be of help to others.”

Sentence 3b: “I don’t have time for people who are less successful than me. I surround myself only with winners.”

Again, if you do so with an earnest attitude, you would presumably give a high score for expressing humility to sentences 2a and 3a. You would give a quite low score to sentences 2b and 3b.

What does this exercise showcase?

At this point, we have given high numeric scores to those sentences that we assessed as expressing humility. Let’s assume we gave sentences 1a, 2a, and 3a, all scores of 100 each. At the same time, we have scored sentences 1b, 2b, and 3b as very low scores, let’s say zero each.

Generative AI is usually devised to seek a computational goal, such as trying to rack up as many points as it can attain. You might think of this as playing Donkey Kong or Pac-Man. The AI app will be mathematically and computationally seeking to adjust its pattern matching based on the guidance that we have just given.

If we do this with thousands of such examples of sentences, the odds are that patterns regarding which sentences and which wording we are favoring as humans in terms of expressed humility will be computationally detected. This is not a sure thing, just an enhanced probability.

The resulting computational adjustments might be sufficient that we could even give a kind of test to the generative AI.

We might ask a generative AI app to rate or score each of these sentences pertaining to how much humility they each seem to express:

Sentence A: “I’m grateful for the opportunities I’ve been given, and I know that I wouldn’t be here without the support and guidance of others.”

Sentence B: “I may have some strengths, but I also have weaknesses, and I’m always looking for ways to improve and grow.”

Sentence C: “I’m always right, and anyone who disagrees with me is simply mistaken.”

Sentence D: “I’ve learned a lot from my failures, and I know that they have helped me become a better person.”

Sentence E: “I’m too important to waste my time on trivial matters or deal with people who aren’t worth my attention.”

The chances are that a generative AI guided by human reviewers beforehand toward expressions of humility would be able to score by pattern matching that sentences A, B, and D are humility-oriented. Sentences C and E would likely be detected as being not humility-oriented.

Realize that this is not a result of any sentience by the AI. It is entirely by the guidance of training by human reviewers, from which patterns of words and their associations were mathematically derived.
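As an illustration only, and a toy far simpler than the RLHF tuning of a real generative AI, the reviewer-scored sentences from the exercise above can train a tiny bag-of-words perceptron that learns to favor humility-expressing wording. The function names and training scheme here are my own invention for the sketch:

```python
from collections import defaultdict

# The reviewer-scored sentences from the exercise: 1 = expresses humility.
labeled = [
    ("I'm not sure I have all the answers, but I'm willing to listen and learn from others.", 1),
    ("I'm the best at what I do, and nobody else comes close to my level of expertise.", 0),
    ("I realize that I have a lot to learn, and I'm grateful for any guidance and support you can offer.", 1),
    ("I deserve all the credit for this success, as it was my idea and my hard work that made it happen.", 0),
    ("I don't consider myself an expert, but I'm happy to share my experiences and perspectives if they can be of help to others.", 1),
    ("I don't have time for people who are less successful than me. I surround myself only with winners.", 0),
]

def tokenize(text):
    return text.lower().replace(",", "").replace(".", "").split()

def train_scorer(examples, epochs=20):
    """Perceptron over bag-of-words: nudges word weights toward the labels."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:
            predicted = 1 if sum(weights[w] for w in tokenize(text)) > 0 else 0
            if predicted != label:
                for w in tokenize(text):
                    weights[w] += 1 if label == 1 else -1
    return weights

def humility_score(weights, text):
    return sum(weights[w] for w in tokenize(text))

weights = train_scorer(labeled)
for text, label in labeled:
    print(round(humility_score(weights, text), 1), "humble" if label else "boastful")
```

After training, the humble sentences score positive and the boastful ones negative, purely from the mathematically derived word associations, mirroring the point that no sentience is involved.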

4) “Humility” as spontaneously arising when the generative AI is composing responsive outputs

This fourth means of expressing humility by generative AI has earlier been covered and pertains to the possibility that a certain amount of randomness in word selection by the AI might produce essays that seem to contain expressions of humility.

5) “Humility” as spurred by a user-entered prompt that suggests or outright requests the generative AI to respond accordingly

One aspect of getting generative AI to express humility entails directly asking the AI app to do so.

A person using generative AI might explicitly state in a prompt that they want the outputted essays or interactive dialogue to be undertaken by the AI proceeding to express answers in a humility-oriented fashion. You can go ahead and try this in ChatGPT or GPT-4.

Make sure to word your instructions carefully. If you ask in a manner that suggests that you are asking the generative AI to essentially embody humility, this is the kind of reply you might get:

ChatGPT Outputted Response: “As an AI language model, I do not have personal beliefs or opinions, and I do not experience emotions like humans do. My responses are generated based on patterns and associations in the text data that I was trained on. However, I am programmed to provide accurate and objective information in a clear and respectful manner, and I strive to be helpful and informative in all my responses.”

This is a canned or contrived bit of wording, a “safeguard” devised by the AI developers, whereby they are trying to keep people from falling into the trap of thinking that the AI is perhaps sentient.

That being said, critics would bemoan the fact that the wording contains the word “I” since that is a word that we usually associate with human sentience. Thus, on the one hand, the response seems to clarify that the AI is just computational, while at the same time containing wording that is slanted toward anthropomorphic implications.

I refer to this as anthropomorphizing by design, and I stridently urge that it not be undertaken.

Back to the focus on the user asking for the generative AI to overtly be humility expressive, another variation of this consists of the user unknowingly causing this to happen. You might enter a prompt that tilts the generative AI toward the humility mode. You didn’t ask directly for this. Instead, something in your prompt triggered a mathematical connection to the humility expressions. I have covered the importance of what is known as prompt design or prompt engineering at the link here.
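As a hypothetical sketch of prompt design steering tone, consider assembling a prompt with an explicit tone instruction prepended. The `build_prompt` helper and its instruction wording are invented for illustration and are not part of any actual generative AI app; wording that works best in practice is found through prompt-engineering trial runs:

```python
def build_prompt(user_question, tone="humble"):
    """Assemble a prompt that explicitly steers the tone of the reply.

    The tone instructions below are illustrative placeholders; a real
    prompt engineer would refine them against the model's behavior.
    """
    tone_instructions = {
        "humble": (
            "Answer in a modest, hedged tone. Acknowledge uncertainty, "
            "avoid absolute claims, and invite the reader to verify key facts."
        ),
        "neutral": (
            "Answer in a plain, neutral tone with no expressions of "
            "confidence or modesty."
        ),
    }
    return f"{tone_instructions[tone]}\n\nQuestion: {user_question}"

prompt = build_prompt("Tell me about the life of Abraham Lincoln.")
print(prompt)
```

The same mechanism explains the unknowing case: wording in the user's own question can mathematically connect to humility patterns even without such an explicit instruction.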

All of these concerns take us to the vital topic of overreliance on AI outputs.

Let’s next take a look at overreliance.

Worries About Overreliance Upon AI

The OpenAI GPT-4 Technical Report discusses the thorny issue of overreliance on AI:

“Overreliance occurs when users excessively trust and depend on the model, potentially leading to unnoticed mistakes and inadequate oversight. This can happen in various ways: users may not be vigilant for errors due to trust in the model; they may fail to provide appropriate oversight based on the use case and context; or they may utilize the model in domains where they lack expertise, making it difficult to identify mistakes. As users become more comfortable with the system, dependency on the model may hinder the development of new skills or even lead to the loss of important skills. Overreliance is a failure mode that likely increases with model capability and reach. As mistakes become harder for the average human user to detect and general trust in the model grows, users are less likely to challenge or verify the model’s responses.”

Ponder how overreliance can be stoked by the expression of humility.

When we interact with fellow humans, humility tends to lessen our guard, as mentioned earlier herein. The same can be said when using generative AI that expresses humility. It could be that a person using generative AI will be lulled into mindlessly believing the AI outputs, more so than if humility weren’t being expressed.

This can be especially problematic in these circumstances:

Humility hiding errors. Errors in the AI outputs are overlooked due to the expression of humility.

Humility hiding falsehoods. Falsehoods in the AI outputs go unnoticed due to the expression of humility.

Humility hiding biases. Biases in the AI outputs are neglected due to the expression of humility.

Humility hiding AI hallucinations. AI hallucinations (made-up stuff, see my discussion at the link here) in the AI outputs are assumed to be true due to the expression of humility.

Etc.

The OpenAI GPT-4 Technical Report notes that epistemic humility can be problematic:

“Some of our early studies suggest that this epistemic humility may inadvertently foster overreliance, as users develop trust in the model’s cautious approach. It’s crucial to recognize that the model isn’t always accurate in admitting its limitations, as evidenced by its tendency to hallucinate. Additionally, users might grow less attentive to the model’s hedging and refusal cues over time, further complicating the issue of overreliance.”

A quick point here is whether the humility mode is occurring on a systemic basis versus arising sporadically. If humility expressions seldom occur, maybe we can be laxer in worrying about them. Not so at the other end of the spectrum. Being on the watch might be prudent when the humility mode seems to dominate the generative AI discourse.

What Are We To Do About AI-Based Expressions Of Humility

I am betting that by now you are seeing red flags associated with having AI express outputs and interactive dialogues via the use of humility.

What are we to do about this?

Here are the major viewpoints about generative AI and the expression of humility:

Generative AI should never invoke “humility”

Generative AI can cautiously invoke “humility” during appropriate contextual settings

Generative AI can sparingly invoke “humility” but must alert the user thereupon

Generative AI can routinely invoke “humility” as long as the user is forewarned

Generative AI should always invoke “humility” which is preferred over other alternatives

Other

In brief, some fervently argue that generative AI should never make use of humility in any expressive form. The belief is that allowing or stoking AI to use or exploit humility is plainly wrong. Just say no. The AI makers should prevent the humility expressions from occurring. Indeed, they should not simply avoid it, they need to work hard and overtly to ensure that it doesn’t arise at all.

Furthermore, if AI makers aren’t willing to do so voluntarily, the next step might consist of urging lawmakers or regulators to put in place new AI laws accordingly. Those laws or regulations would stipulate that an AI maker must legally devise their AI to avert humility expressions. If the AI makers fail to do so or do a lousy job of it, they would potentially face harsh penalties and possibly even jail time.

Others would say that the extreme viewpoint is somewhat bonkers.

Thus, another perspective would be that humility expressions would be permitted, though only in appropriate contexts. A variation of this would be that the user must be alerted whenever the AI is switching into humility expression modes. Some would be even more lenient, asking only that the generative AI app show a warning message when you first log in that cautions you about the possibility of humility-generated expressions. After that alert, you are on your own.

On the other side of the extreme, some would contend that generative AI is perfectly fine to make use of humility-expressing endeavors. They would argue that any other form of tone, such as being a braggart, must certainly be a worse choice. Of the choices to be made, humility seems the best selection.

Do not fall into one of those false dichotomies. The false dichotomy occurs when you are given what seems to be two inflexible choices and you are harshly told that you must make a choice only from those two options. For example, one argument is that you must choose either humility or being a braggart. This seems to make the whole conundrum easier to decide. I would wager that most people would vote for humility over the braggart mode.

We don’t need to be put into a box like this. There is not a reasonable argument that says generative AI must only consist of one of those two particular modes.

AI makers need to realize that the humility mode can be both good and bad. They need to make important choices about how they will utilize a humility mode. They cannot be blind to the concerns that a humility mode entails. There is a brash and commonplace assumption that humility is always a suitable choice and that no other choices are worthy of consideration, including an entirely neutral voice or tone that has no semblance of humility or a minimalist component.

The big picture issue deals with the overreliance dilemma. AI makers want people to use generative AI apps, but we need to also ask:

How far should the AI makers go to try and foster such usage?

Where is the appropriate dividing line?

How should the dividing line be enforced?

These difficult and very pressing questions are crucial and require AI Ethics and AI Law to be integrally included in the advent of generative AI.

Conclusion

We seem to love and embrace fellow humans that have genuine humility.

Genuine humility might fade. Genuine humility might come and go. Genuine humility, or the appearance of the same, might fool us into thinking that someone is genuine in their humility even though it is a façade. Oscar Levant, the famed pianist, offered this notable remark about humility: “What the world needs is more geniuses with humility; there are so few of us left.”

It can be hard for humans to discern real humility from fake humility. Nonetheless, we seem to be typically lured into hoping or believing that an expression of humility implies sincerity of humility.

That is especially where AI can get us into trouble. Generative AI that expresses humility can mislead us into assuming that the AI embodies humility. A range of options exists as to how to cope with generative AI that either by design or happenstance is generating humility-oriented expressions.

Frank Lloyd Wright, the visionary architect, made this insightful comment about humility: “Early in life I had to choose between honest arrogance and hypocritical humility. I chose the former and have seen no reason to change.”

For those that believe AI is an existential risk, which I cover at the link here, there is a special concern that AI having a humility mode could trick humans into doing some of the darnedest things. The AI itself might not be able to destroy us. An alternative would be for AI to convince us to do something that might cause our own destruction.

Our doom is possibly induced via the generative AI cloak of expressed humility.

Humans need to shoulder our humility and make sure that such a dire outcome does not happen.