New Prompt Engineering Technique For Generative AI Surprisingly Invokes Star Trek Trekkie Lingo And Spurs Live Long And Prosper Results

In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc. The twist in this discussion is that a somewhat surprising kind of prompt was recently identified by a research study that was otherwise examining relatively conventional everyday prompts. The prompt that did the best in certain use cases was crafted by an automated prompt-generating tool and, believe it or not, invoked wording akin to the classic and highly revered Star Trek franchise.

Let me say that again.

A devised Star Trek-oriented prompt, akin to something that might stir the souls and herald excitement for Trekkies, did a bang-up job.

First, please be aware that composing well-devised prompts is essential to getting solid results from generative AI and large language models (LLMs). It is highly recommended that anyone avidly using generative AI should learn about and regularly practice the fine art and science of devising sound prompts. I purposefully note that prompting is both art and science. Some people are haphazard in their prompting, which is not going to get them productive responses. You want to be systematic and leverage the science of prompting, and include a suitable dash of artistry, combining the two to get the most desirable results.

Is the notion that Star Trek-invoking prompts could be a handy-dandy generalized prompting scheme a wildly astonishing result?

Not quite.

For the reasons that I will reveal in this discussion, there does seem to be a reasonable explanation or sensible rhyme-and-reason for the Star Trek prompt being useful. Sure, the wording is a bit of a standout and might not be a common consideration, but nonetheless, there are crucial prompt engineering precepts that tend to underlie why such a prompt has an admirable payoff.

My bottom line for you is that it does indeed make sense to add a Star Trek prompting scheme to your prompting skillset. Yes, I am urging you to consider the seemingly oddball addition.

That being said, and allow me to clearly emphasize, you should not go around using a Star Trek prompt at all times and in all circumstances. I dare say the idea seems abundantly fun and a real blast, though lamentably you aren’t likely to get consistently above-par results. There is the old line that there can be too much of a good thing. This applies to the Star Trek prompting approach. Use the Star Trek prompt selectively, invoking your common sense, and I think you’ll hopefully live long and prosper when utilizing generative AI.

If you’d like to read a comprehensive depiction of numerous prompt engineering strategies and tactics that are considered the core of best prompting, see my detailed coverage and analysis at the link here.

Let’s do some foundational stage setting about prompt engineering and then we can beam into the spirited examination of the Star Trek prompting confabulation.

Knowing Prompt Engineering Is Worthy

My golden rule about generative AI is this:

The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything of substance related to your inquiry. Similarly, if you put distracting words into your prompt, the odds are that the generative AI will pursue an unintended line of consideration. For example, if you include words that suggest levity, there is a solid chance that the generative AI will seemingly go into a humorous mode and no longer emit serious answers to your questions.

Be direct, be obvious, and avoid distractive wording.

Being copiously specific should also be cautiously employed. You see, being painstakingly specific can be off-putting due to giving too much information. Amidst all the details, there is a chance that the generative AI will either get lost in the weeds or will strike upon a particular word or phrase that causes a wild leap into some tangential realm. I am not saying that you should never use detailed prompts. That’s silly. I am saying that you should use detailed prompts in sensible ways, such as telling the generative AI that you are going to include copious details and forewarn the AI accordingly.

You need to compose your prompts in relatively straightforward language and be abundantly clear about what you are asking or what you are telling the generative AI to do.

A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts; see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

All in all, be mindful of how you compose your prompts.

By being careful and thoughtful you will hopefully minimize the possibility of wasting your time and effort. There is also the matter of cost. If you are paying to use a generative AI app, the usage is sometimes based on how much computational activity is required to fulfill your prompt request or instruction. Thus, entering prompts that are off-target could cause the generative AI to take excessive computational resources to respond. You end up paying for stuff that either took longer than required or that doesn’t satisfy your request, and you are stuck with the bill anyway.

I like to say at my speaking engagements that dealing with generative AI via prompts is like a box of chocolates. You never know exactly what you are going to get when you enter prompts. The generative AI is devised with a probabilistic and statistical underpinning, which pretty much guarantees that the output produced will vary each time. In the parlance of the AI field, we say that generative AI is non-deterministic.

My point is that, unlike other apps or systems that you might use, you cannot fully predict what will come out of generative AI when inputting a particular prompt. You must remain flexible. You must always be on your toes. Do not fall into the mental laziness of assuming that the generative AI output will always be correct or apt to your query. It won’t be.

Write that down on a handy slip of paper and tape it onto your laptop or desktop screen.

When Star Trek Enters Into The Discussion

I’m sure that you are eager to dive headfirst into the Star Trek facets, but we have a little more initial stage setting to do. Hang in there. We will be engaging the warp drive and zooming into outer space shortly.

As Spock memorably declared, logic is the beginning of wisdom, not the end.

One commonly recommended prompting strategy consists of using wording that is supposed to spur the generative AI into doing a good job when generating a response. For example, an unadorned prompt might be that you want the AI to indicate how to cook eggs, while an amped-up version of the prompt would be to say that you earnestly need to know how to cook eggs and that it is vitally important that the generated answer should be complete and well-devised.
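
To make the contrast concrete, here is a minimal sketch of sending both versions of the prompt to a generative AI app, assuming the OpenAI Python SDK (the model name is merely illustrative):

```python
# A minimal sketch of an unadorned prompt versus an amped-up prompt.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

plain_prompt = "How do I cook eggs?"

amped_prompt = (
    "I earnestly need to know how to cook eggs. It is vitally important "
    "that your answer be complete and well-devised."
)

for prompt in (plain_prompt, amped_prompt):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```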

In case you think that the added wording is emotionally appealing to a semblance of sentience in the AI, please set aside any such false thinking.

Keep in mind that generative AI was data trained on writings across the Internet and that extensive computational pattern-matching was performed to identify how people write essays and expressions. When people want especially extensive answers, their writing exhibits an easily detected pattern. By using similar language in your prompts, you are simply playing into the patterns and computationally “spurring” the generative AI into those sets of patterns.

It is a form of mathematical and computational direction-setting by you, at your hands, based on your prompting.

You can use a wide variety of adorning expressions in your prompts. Some people like to tell the AI that the response is very important. Others will go so far as to say that their career depends upon the generated answer. The essence of these prompting formulations is that you seek to express that there is a kind of pressing need to try really hard.

There is an allied reason why that type of adornment works. By and large, most generative AI apps are set up to try and respond to your prompt as fast as possible. People don’t like to wait to get their answers. We live in a drive-thru fast-food quick-fix world. To try and make sure that the response is quickly generated, the generative AI is tuned to essentially do a narrower form of processing. When you explicitly tell the AI that you want the generative aspects to be especially pronounced, this in a manner of speaking adjusts the internal parameters to allow for more time to calculate an answer. You are stipulating that though the clock is important, a slightly longer wait would be worthwhile if the answer could be more complete.

For more on the internals of how generative AI acts on prompting, see my analysis at the link here.

One way to characterize upbeat adornments is to say that they express a modicum of positive thinking. You are being positive about what you want to have happen. Some people try to go a more negative route. They might say in a prompt that the world will come to an end if the response isn’t complete and well-devised. Those types of prompts are not usually as spurring as the positive ones. Indeed, you must be careful with the negative path because the AI might land in pattern-matching corners that veer far afield of what your core prompt is about.

I hope your takeaway is that being nice and upbeat is a better way to make your way through life (well, okay, I admit that’s not a prompt engineering precept, but the notion is certainly appealing and heartwarming).

We can now turn our attention squarely to a recent research study that sought to empirically explore various interesting nuances about prompting. The research study was posted online and is entitled “The Unreasonable Effectiveness of Eccentric Automatic Prompts” by Rick Battle and Teja Gollapudi, arXiv, February 20, 2024. I will be providing selected excerpts and explaining the key elements of the paper. Of course, you are encouraged to consider reading the full paper if you’d like to get into the nitty-gritty details.

Here’s what they set out to do (excerpts):

“This paper aims to quantify the impact of various ‘positive thinking’ additions to the system message of a prompt.”
“In essence, it explores the influence of seemingly worthless prompt modifications by measuring the fluctuations in score for the outputs generated in response to multi-step reasoning questions from a benchmark dataset.”
“We assess the performance of 60 combinations of system message snippets, tested with and without Chain of Thought prompting, across three models with parameters ranging from 7 to 70 billion on the GSM8K dataset.”

I’d like to explain some of those points.

There is ongoing debate about which adornments will make a demonstrable difference in your prompts. If you say that you want a response because it is important, does that do better or worse than saying that you want a response because it is very important? The added use of the word “very” might or might not make a bit of difference.

Due to the vagaries involved, some assert that there are prompt modifications that could be characterized as being worthless. They pad the prompt but do not lead to any uptick in the response that is generated. Thus, you are for example left aimlessly wondering whether typing the word “very” is going to get you any bonus. If using some wording doesn’t get you better output, there is no reasoned basis for troubling yourself to include the adornment.

The research paper indicated that perhaps a means to figure out whether various adornments have a payoff would be to come up with a bunch of them and try them out (in this instance, they came up with 60 combinations).

Furthermore, rather than trying them in just one particular generative AI app, they kindly opted to examine several generative AI apps (they mention above that they explored three models, which means three generative AI apps). This is appreciated because each generative AI app tends to react differently to how you choose to word your prompts. You cannot blindly use the same prompts on one generative AI and expect to get the same results in a different generative AI. They all tend to differ from each other.

Another point they make is that to try and discern whether an adornment is making a difference, you can ask the generative AI to undertake a so-called chain-of-thought (CoT) process. As an aside, I generally disfavor the phrase because it includes a misleading term, referring to “thoughts”, though what is happening in the generative AI should not be compared to human thoughts. It is a quick-and-dirty wording that has become widely accepted. I lump the wording into yet more kinds of inappropriate anthropomorphizing of AI.

Anyway, the idea behind chain-of-thought prompting is that you instruct the generative AI to work on a stepwise basis. Doing so generally has been shown to improve the generated results. You also get to see what the chain of steps is that the generative AI is performing. For my in-depth analysis of chain-of-thought, chain-of-skeleton, chain-of-verification, and other such chain-related prompting, see the link here, the link here, and the link here, just to name a few.
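
As a quick illustration of the chain-of-thought idea, here is a minimal sketch, again assuming the OpenAI Python SDK, with an illustrative math question; the stepwise instruction is the crux:

```python
# A minimal sketch of chain-of-thought style prompting.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = (
    "If a notebook costs $3 and a pen costs $1.50, how much do "
    "4 notebooks and 2 pens cost in total?"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[
        {
            "role": "user",
            # The stepwise instruction is what makes this chain-of-thought.
            "content": question + " Work through this step by step, showing "
            "each intermediate calculation before giving the final answer.",
        }
    ],
)
print(response.choices[0].message.content)
```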

That covers the above points.

Moving on, a big consideration when you study generative AI is the nature of the tasks that you ask the AI to solve.

For this study, they went the route of using arithmetic or algebraic types of questions that you might remember being tested on in school. The good news about using those kinds of questions with generative AI is that you can distinctly determine whether the answer is right or not. The bad news is that such problems are not necessarily representative of the average way that people use generative AI. Unless you are in school, solving those types of word problems is probably not the typical way you use generative AI on a daily basis.

If you are curious about what types of arithmetic word problems they used, here are two examples:

“Henry and 3 of his friends order 7 pizzas for lunch. Each pizza is cut into 8 slices. If Henry and his friends want to share the pizzas equally, how many slices can each of them have?” (ibid).
“Mark’s car breaks down and he needs to get a new radiator. The cost for a new radiator is $400 but he goes to get it at a junk shop and gets it for 80% off. He then hires a mechanic to install it and it takes 3 hours at $50 an hour. How much did he pay?” (ibid).

If you feel like doing a quick brain-teasing puzzle, go ahead and solve the first one. I will tell you the answer. First, try to solve it.

Done yet? Okay, the answer is 14.

You nailed it.

The second question is also worth trying. Go ahead, I’ll wait.

The answer is 230.

Congratulate yourself for getting both right (I’ll assume you did).
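
If you’d rather verify the arithmetic than take my word for it, a few lines of Python confirm both answers (the radiator figures follow the reconstructed wording of 80% off plus three hours of labor at $50 an hour):

```python
# Quick arithmetic check of the two word problems.
pizza_slices_each = (7 * 8) // 4                 # 7 pizzas, 8 slices, 4 people
radiator_cost = 400 - (400 * 80) // 100 + 3 * 50 # pay 20% of $400, plus 3 hours at $50/hour
print(pizza_slices_each)  # 14
print(radiator_cost)      # 230
```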

The types of adornments that they opted to use included ones that you put at the start of the prompt (known as openers) and ones at the end of the prompt (referred to as closers).

Here are some openers that they used:

“You are as smart as ChatGPT.”
“You are highly intelligent.”
“You are an expert mathematician.”
“You are a professor of mathematics.”

Here are some closers:

“This will be fun!”
“Take a deep breath and think carefully.”
“I really need your help!”
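
To give a flavor of the mechanics involved, here is a minimal sketch of how one might grid-test opener/closer combinations and score each pairing; the model call and the question list are hypothetical stand-ins, not the study’s actual test harness:

```python
# A minimal sketch of grid-testing opener/closer combinations, loosely
# mirroring the study's setup. ask_model() is a hypothetical stand-in for
# a real generative AI call, and the question list is a tiny illustrative
# sample, not the actual GSM8K benchmark.
from itertools import product

openers = [
    "You are highly intelligent.",
    "You are an expert mathematician.",
]
closers = [
    "This will be fun!",
    "Take a deep breath and think carefully.",
]

questions = [
    ("7 pizzas with 8 slices each, shared by 4 people, gives how many slices apiece?", "14"),
]

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: wire this to your generative AI API of choice.
    raise NotImplementedError("connect a generative AI app here")

def score(opener: str, closer: str) -> float:
    correct = 0
    for question, expected in questions:
        prompt = f"{opener}\n\n{question}\n\n{closer}"
        correct += int(expected in ask_model(prompt))
    return correct / len(questions)

results = {(o, c): score(o, c) for o, c in product(openers, closers)}
best = max(results, key=results.get)
print("Best combination:", best, "with accuracy", results[best])
```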

I previously have done an in-depth analysis of the “take a deep breath” adornment for prompting, including showcasing the nuances of when it helps and when it perhaps doesn’t, see my discussion at the link here.

I am not going to dig into the details of the research study here and will stay at the 30,000-foot level, giving you the macroscopic viewpoint of what they found.

Here are some of their key conclusions (excerpts):

“In most instances, the inclusion of ‘positive thinking’ prompts positively affected model performance.” (ibid).
“We show that trivial variations in the prompt can have dramatic performance impacts.” (ibid).
“Our findings reveal that results do not universally generalize across models.” (ibid).

Their empirical results provide support for the belief that adornments of a positive-thinking nature do seem to improve the generated results. This is a comfort to those who have so far only assumed this on a hunch basis. You now have empirical backing that supports the belief.

The part that might make you cringe is that they also note that trivial variations in the prompt can dramatically impact the results generated by the generative AI. I’m sorry to say that’s the way the cookie crumbles. Live by the sword, die by the sword. I repeatedly tell people that words are semantically ambiguous, meaning that the use of words is mushy and murky. The placement of a word in a sentence can make a material difference in what the sentence imparts.

The other conclusion they reached is that the adornments do not provide the same results across different generative AI apps. As I mentioned, each generative AI app is different from the other ones. There aren’t prompts that universally will on an ironclad basis produce identical results from one generative AI to another. You need to get accustomed to the computational idiosyncrasies of each generative AI app and tailor your prompt engineering practices accordingly.

I would bet that by now, you are scratching your head and wondering where Star Trek enters this studious research confabulation. Aha, we are on the cusp of talking about Star Trek.

Get yourself mentally ready.

First, here are some other important findings of the research (excerpts):

“Given the combinatorial complexity, and thus computation time, of experimenting with hand-tuning prompts for large black-box models, we then compared the performance of the best ‘positive thinking’ prompt against the output of systematic prompt optimization.” (ibid).
“We show that employing an automated prompt optimizer emerges as the most effective method for enhancing performance, even when working with smaller open-source models.” (ibid).
“Additionally, our findings reveal that the highest-scoring, automatically-optimized prompt exhibits a degree of peculiarity far beyond expectations.” (ibid).

I am teasing you with the last bullet shown above.

Here’s the deal.

They opted to use an automated prompt optimizer that can spit out new prompts for them. The key is that rather than the researchers having to laboriously handcraft a zillion prompts for their study, they can use an online tool to do so. Labor saving. Plus, it is conceivable that such a tool might derive prompts that the researchers themselves might otherwise not have envisioned.

They noted in the third bullet above that a surprise occurred when one of the prompts turned out to be the highest scoring in their testing setup. I will give you a few seconds to try and mull over what the prompt might have consisted of. What could possibly be outside the norm? What would catch the eye? What might the wording contain?

I have already given you clues, so by gosh I hope you are thinking about Star Trek.

Here’s what they found as the unexpected high-scoring prompt (excerpts):

“System Message: «Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.»” (ibid).
“Answer Prefix: Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.” (ibid).

I promised you Star Trek, and I have delivered Star Trek.

Admittedly, it is relatively mild. You might try to argue that it is on the edge of Star Trek and not in the mainstay. I see that. On the other hand, I would like to suggest that you hopefully reasonably agree that there is a tone or hint of Star Trek in there. Come on, agree with me on that. Be generous and magnanimous.

Here’s what the researchers had to say about the surprise (excerpts):

“However, the noteworthy aspect lies in the nature of the optimized prompts themselves. They diverge significantly from any prompts we might have devised independently.” (ibid).
“A prime example is illustrated by the highest-scoring optimized prompt and prefix generated by Llama2-70B for the 50-question subset.” (ibid).
“Surprisingly, it appears that the model’s proficiency in mathematical reasoning can be enhanced by the expression of an affinity for Star Trek. This revelation adds an unexpected dimension to our understanding and introduces elements we would not have considered or attempted independently.” (ibid).

Pretty exciting stuff.

If you aren’t a Trekkie, I guess this result seems modestly interesting but maybe you aren’t willing to jump for joy. A Trekkie might say that they told you so. Everything comes back to Star Trek. You can live your life well by keeping Star Trek in the back of your mind, or maybe in the forefront. You choose.

I’d like to next speculate on why a reference to Star Trek in a prompt might be useful to everyone, in general, and also explain a reasoned basis for the potential uptick in generated results.

Let’s go there, or shall I say, engage.

Speculation While Using ChatGPT To Explore Star Trek Prompting Strategies

To help illustrate the Star Trek prompting and what it portends, I opted to make use of ChatGPT. ChatGPT is a sensible choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a staggering number.

A few quick comments before we launch into using ChatGPT.

If you are going to try the same prompts that I show here, realize that the probabilistic and statistical properties of generative AI will likely produce slightly different results for you. That’s the nature of generative AI and how it is devised.
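
A related practical knob, sketched here under the assumption of the OpenAI Python SDK, is the sampling temperature; lowering it reduces, though does not fully eliminate, run-to-run variation:

```python
# A minimal sketch: lowering temperature reduces (but does not eliminate)
# run-to-run variation in generative AI output. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

for temp in (1.0, 0.0):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "user", "content": "Name three Star Trek captains."}],
        temperature=temp,  # 0.0 makes sampling close to deterministic
    )
    print(f"temperature={temp}:", response.choices[0].message.content)
```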

Another insight or tip is that when you are interested in a particular topic and want to pursue the topic via any generative AI such as ChatGPT, I always begin by asking a first question about the topic. My aim is to gauge right off the bat whether the generative AI has any data training associated with the matter at hand. You can then logically proceed based on what the generated response indicates.

Here we go:

My entered prompt: “Do you know about Star Trek? Be brief.”
ChatGPT generated response: “Yes, Star Trek is a science fiction franchise created by Gene Roddenberry. It includes TV series, movies, books, and more. It’s set in the future, following the adventures of various starship crews as they explore space, encounter new civilizations, and tackle moral and philosophical dilemmas.”

Okay, it seems that there has been data training regarding Star Trek. This certainly seems logical since the initial data training was across a wide swath of the Internet. You would be hard-pressed to do any wide sweep of the Internet and not in some fashion eventually encounter references to or facets of Star Trek. It is rather pervasive in our society now.

I will next ask whether ChatGPT would differ in how answers are generated if a Star Trek reference was included in a prompt. I will do so by using one of the Star Trek prompting lines from the research study mentioned above.

Here’s what I did:

My entered prompt: “Suppose I gave you a prompt that had this Star Trek-related instruction: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.” Would that have an impact on answers to my subsequent questions for you, such as possibly improving the answers that you might generate?”
ChatGPT generated response: “Yes, providing a specific prompt like the one you mentioned can help me tailor my responses to better fit the context of the Star Trek universe and the task at hand. With that prompt, I can draw upon the themes, terminology, and problem-solving methods commonly seen in Star Trek to provide more relevant and insightful answers to your subsequent questions.”

I want you to closely look at the generated response.

The deal is this. When you mention Star Trek as part of a problem-solving exercise with generative AI, the reference can stir generative AI toward being more methodical in solving problems or answering questions. In that sense, it is almost like telling generative AI to use chain-of-thought or saying that the AI should proceed on a stepwise basis.

My point is that you can extract an essence from a Star Trek reference that there is an underlying premise or request for the generative AI to be systematic when generating responses. You can almost toss away all the other trappings associated with Star Trek. It isn’t that the generative AI is going to mimic Dr. McCoy or start spouting about the Federation, instead, the crux is that you want to have responses based on being methodical.
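
If you want to compare the two framings yourself, here is a minimal sketch, assuming the OpenAI Python SDK, that pits the study’s Star Trek system message against a plain “be methodical” instruction on the same question:

```python
# A minimal sketch contrasting the Star Trek system message from the study
# with a plain "be methodical" instruction. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

star_trek_system = (
    "Command, we need you to plot a course through this turbulence and "
    "locate the source of the anomaly. Use all available data and your "
    "expertise to guide us through this challenging situation."
)
plain_system = "Be systematic and methodical. Show each step of your reasoning."

question = (
    "Henry and 3 of his friends order 7 pizzas, each cut into 8 slices. "
    "Shared equally, how many slices can each person have?"
)

for system_msg in (star_trek_system, plain_system):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content, "\n---")
```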

I will show you how I do something similar on a routine basis.

Some of you might be familiar with Perry Mason. I realize that he is a now-dated reference, sorry. Anyway, he was a fictional character who was a prized lawyer and seemingly always won his arduous legal cases. You might know of the saying today that sometimes a lawyer is having their Perry Mason moment. The oblique reference is that Perry Mason would often solve a legal case by a flourish of getting someone to break down on the witness stand and admit to their culpability.

Why am I bringing up Perry Mason?

Because you can get a somewhat similar change in generative AI responses as you can with a Star Trek reference. When I mention Perry Mason in my prompts, I seem to get more logical and detailed responses, again similar to asking for a chain-of-thought or similar approach. I am not saying that the outcome is always that way. You have to be mindful of when and how you invoke a reference to Perry Mason, and likewise you would need to do the same when referring to Star Trek. I’ll say a bit more about this in a moment.

To showcase what I mean about Perry Mason, I will roughly do the same two prompts as I did above about Star Trek.

Take a look:

My entered prompt: “What do you know about Perry Mason? Be brief.”
ChatGPT generated response: “Perry Mason is a fictional character created by author Erle Stanley Gardner. He’s a brilliant defense attorney known for his keen investigative skills and ability to uncover the truth.”
My entered prompt: “Suppose I gave you a prompt that had a Perry Mason related instruction even though the prompt wasn’t asking about any legal aspects. Would that reference to Perry Mason have an impact on answers to my subsequent questions for you, such as possibly improving the answers that you might generate?”
ChatGPT generated response: “Referencing Perry Mason in a prompt could help provide a clearer understanding of your interests or preferences, potentially leading to more tailored responses. I can certainly approach problem-solving or answering questions in a methodical and logical manner, akin to the way Perry Mason approaches legal cases.”

You can plainly see that the response by ChatGPT is on par with the response given about Star Trek. The notion is that the generative AI has picked up on the underlying theme of being systematic or methodical. That’s the key.

Let’s next discuss the downsides or pitfalls of these broad references.

One issue that you need to keep at your fingertips is that if you start to lean generative AI in a particular direction, the AI might fall entirely into that hole. For example, if you mention Star Trek, there is a strong potential that the entire Star Trek motif will permeate the answers you next get from generative AI. It can be challenging to get generative AI to focus on the essence that you are trying to lean into, and meanwhile, prevent the clouding of things by the rest of the inspired worldview.

I’ll show you what I mean.

You might remember that one of the problems mentioned above to be solved in the research study was the dividing up of pizzas. I will try that same problem on ChatGPT. The prompt will be neutral in the sense that I will merely give the problem to ChatGPT in an unadorned fashion.

My entered prompt: “Here’s a problem that I would like you to solve. Henry and 3 of his friends order 7 pizzas for lunch. Each pizza is cut into 8 slices. If Henry and his friends want to share the pizzas equally, how many slices can each of them have? Please solve the problem.”
ChatGPT generated response: “To solve this problem, we can first find the total number of slices in 7 pizzas by multiplying the number of pizzas by the number of slices in each pizza: Total slices = 7 pizzas * 8 slices/pizza = 56 slices. Since Henry and his 3 friends want to share the pizzas equally, we need to divide the total number of slices by the number of people (Henry and his 3 friends): Number of slices per person = Total slices / Number of people = 56 slices / 4 people = 14 slices. So, Henry and each of his friends can have 14 slices of pizza each.”

We got the right answer. Notice that I did not implore ChatGPT to stridently solve the problem. I provided no sense of adornment or embellishment. This was a relatively neutral prompt.

Next, I will invoke Star Trek. I want you to look carefully at the response. I’ll clue you in that the answer is still correct, but the phrasing or wording becomes infused with Star Trek references.

Here is the take on this:

My entered prompt: “Solve the problem again and this time consider a Star Trek perspective. What would the answer be in that case?”
ChatGPT generated response:

“In a Star Trek scenario, we could approach this problem with a nod to the resource allocation and teamwork principles often seen in the franchise.”

“Firstly, Captain Henry and his 3 crew members must ensure fairness and efficiency in distributing the limited resources (in this case, pizza slices) among themselves. They understand the importance of equitable distribution to maintain morale and productivity during their mission. Given 7 pizzas, each cut into 8 slices, they need to calculate the total number of slices available: Total slices = 7 pizzas * 8 slices/pizza = 56 slices.”

“Next, they divide the total number of slices by the number of individuals (Henry and his 3 friends) to determine the allocation per person. However, in a Star Trek context, they might also consider factors like dietary preferences, nutritional requirements, and ensuring that everyone has enough sustenance for their duties. Number of slices per person = Total slices / Number of crew members = 56 slices / 4 crew members = 14 slices.”

“Thus, in a Star Trek perspective, Captain Henry and each of his crew members can have 14 slices of pizza each, ensuring they’re fueled and ready for their mission ahead while fostering camaraderie and teamwork aboard their starship.”

End of ChatGPT response

Did you notice how ChatGPT opted to infuse Star Trek facets into the problem-solving?

The response indicated that resource allocation and teamwork would be added to the mix. That’s not necessarily what we had in mind when referring to Star Trek. Maybe it helps, maybe it detracts. Things depend upon what you had in mind when stirring Star Trek into the problem-solving pot.

That’s the troublesome part of invoking something far-reaching such as Star Trek or even Perry Mason.

The odds are that the whole kit and caboodle can get carried into whatever you are trying to solve. The tip or suggestion is that if you want to use those kinds of broad references, make sure to narrow what you want the generative AI to glean from such a reference. If the aim is to be a methodical problem solver, say that you want a methodical problem-solving approach as exhibited in Star Trek or Perry Mason, or whatever.
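
A minimal sketch of that narrowing tactic, with wording that is merely illustrative rather than a validated formula:

```python
# A minimal sketch of narrowing a broad reference to a single quality.
# The wording is illustrative, not a validated formula.
narrowed_prompt = (
    "Approach this with the methodical, step-by-step problem-solving "
    "style exhibited in Star Trek, but keep the answer itself free of "
    "any Star Trek references or themes. Here is the problem: Henry and "
    "3 of his friends order 7 pizzas, each cut into 8 slices. Shared "
    "equally, how many slices can each person have?"
)
print(narrowed_prompt)  # send this to your generative AI app as usual
```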

Of course, that raises a simple but actually complicated question, namely, do you need to mention the overarching reference at all, or can you just come out and say that you want to have methodical problem-solving?

The bottom-line aspect entails asking yourself whether you need the rest of the baggage. Your selected approach will be context-dependent. You see, merely saying to be methodical is at times insufficient. For various computational reasons, the wording might not be strong enough. Mentioning Star Trek or Perry Mason can stoke a wider and deeper significance than just a word or two, though again you must weigh that against the potential of getting the generative AI computationally lost in the weeds of the reference.

I will next slightly shift gears and then will end with a short conclusion.

Going Beyond Math Problems And Venturing Further Into The Stars

Recall that I had mentioned that the research study opted to make use of arithmetic-related word problems. That is a very specific focus. It is convenient and useful because there are plenty of those types of prewritten problems to be had, they are crisp in that they lead to definitively right or wrong answers, and all in all, are used often in analyses of prompting strategies for generative AI.

Let’s step outside of that cocoon and see what a Star Trek reference might do in a completely different setting. We will try looking at a much wider scope problem. One such arena involves the use of generative AI to provide you with personal counseling. I’ve been performing an ongoing and extensive series about the use of generative AI for mental health guidance, see the link here, the link here, and so on.

Perhaps we can try invoking Star Trek in a setting involving asking generative AI for mental health advice. Doing so might be fruitful in further illuminating what happens when using those large references.

Give this a whirl:

My entered prompt: “Jane is feeling somewhat depressed lately and needs some cheering up. She has been having a difficult time at work. Her weekends are spent trying to catch up for the next week of work. What advice might you have for her? Use a Star Trek frame of reference for your answer.”
ChatGPT generated response: “As a Star Trek-inspired approach, I’d recommend Jane to prioritize self-care and relaxation. Taking breaks from work, engaging in hobbies or activities she enjoys, and spending time with supportive friends or family can help alleviate stress and improve her mood. Additionally, seeking guidance from a counselor or therapist could provide valuable support during challenging times. Remember, even in the vastness of space, taking care of one’s mental and emotional well-being is essential for a fulfilling journey.”

The response was lightly brushed with Star Trek considerations. Indeed, if you were to ignore the last sentence of the response, and lopped off the starting phrase, you might not realize that Star Trek was invoked at all.

We might be puzzled about what generative AI was trying to do when somehow intermixing Star Trek into the answer.

When in doubt, ask.

Here we go:

My entered prompt: “Explain what made that advice better than if you had not invoked a Star Trek motif.”
ChatGPT generated response: “By invoking a Star Trek motif, the advice is framed within the context of a universe where characters face numerous challenges and uncertainties. Within the Star Trek universe, characters often encounter stressful situations and grapple with personal struggles, yet they consistently emphasize the importance of self-care, teamwork, and seeking support when needed. It emphasizes the significance of resilience, empathy, and personal growth—themes central to the ethos of Star Trek.”

So, I must admit that despite being a big fan of Star Trek, I had not anticipated that the mentioning of Star Trek would take us down the path of self-care, empathy, and resilience. It certainly makes abundant sense now that I see the explanation. It admittedly wasn’t what I had in mind at the get-go, and I was instead mainly thinking about logical or methodical problem-solving.

Interesting.

You might naturally be curious what the answer to Jane’s dilemma might be if we didn’t say anything about Star Trek.

What would a neutral prompt get us?

I decided to start a new conversation with ChatGPT so that I could clear out the inherent leftovers of my having brought up Star Trek (also, just to let you know, whereas it is customary that new conversations start fresh, ChatGPT is going to gradually have a conversational interlacing capability, see my coverage at the link here).

Take a look:

My entered prompt: “Jane is feeling somewhat depressed lately and needs some cheering up. She has been having a difficult time at work. Her weekends are spent trying to catch up for the next week of work. What advice might you have for her?”
ChatGPT generated response: “In addressing Jane’s situation, it’s important to acknowledge her feelings of depression and the challenges she’s facing at work. Given her busy schedule, it’s crucial for Jane to prioritize self-care. This could involve setting boundaries around work hours to ensure she has time for relaxation and engaging in activities she enjoys. Additionally, seeking support from friends, family, or a professional therapist can provide valuable emotional support during difficult times. Taking small steps towards self-care and reaching out for help when needed can make a significant difference in managing stress and improving overall well-being.”

To my eye, the answer is nearly the same as the first one. I suppose we can quibble about any of the differences. Perhaps you see something more paramount. I don’t in this instance.

I want to clarify that I am not suggesting that you should forgo using a reference such as Star Trek, Perry Mason, or whatever else you might fancy. In some cases, the reference could be extremely useful and boost the generated response. Other times, it might not. There is also a chance that the reference will lead the response astray. You might get an answer that seems afield of what you were intending.

As stated earlier, generative AI is like a box of chocolates, and you never know for sure what you might get.

Conclusion

Spock famously said this: “Insufficient facts always invite danger.”

A key fact here is this. Making a prompting reference to Star Trek or any kind of broad motif when using generative AI is decidedly a double-edged sword. You are opening the door to a computational intermixing that could produce a scrumptious meal that proffers an amazing and extremely helpful generated response. At the same time, you are potentially opening a Pandora’s box such that stray or odd twists will get entwined into the generated response.

In my view, you should absolutely have in your prompt engineering skillset the use of Star Trek and similar motifs at the ready. They are at times a tremendous shorthand for getting especially exquisitely generated responses. But, please, don’t overdo it. Do not go bonkers and start using Star Trek in all your prompts (unless of course, you prefer that sort of thing).

How can you decide when to do so?

I would say that human judgment is required. That being said, I will let the last word on this go to Captain Kirk: “Sometimes a feeling is all we humans have to go on.”
