Why you should learn Prompt Engineering

When I first heard about prompt engineering,

I thought:

  • It's a scam.
  • How can "explaining something in natural language (i.e., English)" be engineered?
  • Even if it is, it must be over-engineering.

However, I was wrong.

By changing (engineering) the prompt, you can get:

  • More accurate output
  • More succinct output
  • Less noise, and exactly what you want

With LLMs, you want to hit the bullseye, not the sides, as often as possible to reduce hallucinations.

Here are 2 prompt engineering techniques:

1. Few-Shot Prompting

Give examples of the format you want.

The model learns from the examples in the prompt:

User:

Input: London
Output: LON
Input: Stockholm
Output: ARN
Input: Copenhagen
Output: ?

AI:

CPH
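The few-shot prompt above can be assembled programmatically from example pairs. Here is a minimal sketch (the helper name `build_few_shot_prompt` is my own; the actual LLM call is omitted):

```python
def build_few_shot_prompt(examples, query):
    # Each (input, output) pair demonstrates the desired format;
    # the final input is left unanswered so the model completes the pattern.
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [("London", "LON"), ("Stockholm", "ARN")]
prompt = build_few_shot_prompt(examples, "Copenhagen")
print(prompt)
```

The resulting string is exactly the prompt shown above, ready to send to any model.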

2. Chain-of-Thought (CoT)

Chain-of-thought is about forcing reasoning steps before the final answer.

Without chain-of-thought, you could write a prompt like this:

A shop sells a laptop for $1000.
There is a 20% discount,
then 10% tax is applied.
What is the final price?

You might get the right answer.

You might not.

The model may shortcut or miscalculate.

Using chain-of-thought, you would add these instructions to the question instead:

Think step by step.
First calculate the discounted price.
Then apply tax.
Then give the final answer.
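Appending those instructions to any question can be done with a small wrapper. A minimal sketch (the function name `with_cot` is my own):

```python
# Chain-of-thought instructions, appended verbatim to the question.
COT_SUFFIX = (
    "Think step by step.\n"
    "First calculate the discounted price.\n"
    "Then apply tax.\n"
    "Then give the final answer."
)

def with_cot(question):
    # Combine the original question with the step-by-step instructions.
    return question + "\n" + COT_SUFFIX

question = (
    "A shop sells a laptop for $1000. "
    "There is a 20% discount, then 10% tax is applied. "
    "What is the final price?"
)
print(with_cot(question))
```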

Now the model does something like:

20% of 1000 = 200
Discounted price = 800
10% tax on 800 = 80
Final price = 880
Answer: $880
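The model's reasoning here is plain arithmetic, so it is easy to verify. A quick check of the steps above:

```python
price = 1000
discount = price * 0.20        # 20% of 1000 = 200
discounted = price - discount  # discounted price = 800
tax = discounted * 0.10        # 10% tax on 800 = 80
final = discounted + tax       # final price = 880
print(final)  # 880.0
```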

These are just 2 of many prompt engineering techniques.

By getting into the habit of writing deliberate prompts, you can get better results from AI.

Considering how much AI has entered our daily lives, I think learning prompt engineering is huge!