Prompt Chaining

Introduction to Prompt Chaining

To improve the reliability and performance of LLMs, one important prompt engineering technique is to break tasks into subtasks. Once those subtasks have been identified, the LLM is prompted with a subtask and its response is then used as input to another prompt. This is what's referred to as prompt chaining: a task is split into subtasks with the idea of creating a chain of prompt operations.

Prompt chaining is useful for accomplishing complex tasks that an LLM might struggle to address when given one very detailed prompt. In prompt chaining, the chained prompts perform transformations or additional processing on the generated responses before reaching a final desired state.

Besides achieving better performance, prompt chaining helps boost the transparency of your LLM application and increases its controllability and reliability. This means you can debug problems with model responses much more easily, and analyze and improve performance at the individual stages that need improvement.

Prompt chaining is particularly useful when building LLM-powered conversational assistants and when improving the personalization and user experience of your applications.
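At its core, a prompt chain is just a sequence of prompt templates where each step's output is substituted into the next step's template. A minimal sketch of this control flow, using a stand-in `call_llm` function where a real model call would go:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; echoes a stub so the flow is runnable.
    return f"<response to: {prompt[:40]}...>"

def run_chain(templates, inputs):
    """Run templated prompts in order; each step sees the previous step's
    output under the key 'previous' alongside the initial inputs."""
    output = ""
    for template in templates:
        prompt = template.format(**inputs, previous=output)
        output = call_llm(prompt)
    return output

# A two-step document-QA chain mirroring the pattern described above.
templates = [
    "Extract quotes relevant to '{question}' from: {document}",
    "Answer '{question}' using these quotes: {previous}",
]
answer = run_chain(templates, {"question": "What is prompt chaining?",
                               "document": "..."})
```

The key design point is that each step only needs the previous step's output (plus shared inputs), which is what makes the individual stages easy to inspect and debug.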

Use Cases for Prompt Chaining

Prompt Chaining for Document QA

Prompt chaining can be used in different scenarios that involve several operations or transformations. For instance, one common use case of LLMs is answering questions about a large text document. Here it helps to design two different prompts: the first prompt extracts quotes relevant to the question from the document, and the second prompt takes the quotes and the original document as input to answer the given question. In other words, you create two different prompts to perform the task of answering a question given a document.

μ•„λž˜μ˜ 첫 번째 ν”„λ‘¬ν”„νŠΈλŠ” 질문이 μ£Όμ–΄μ§„ λ¬Έμ„œμ—μ„œ κ΄€λ ¨ μΈμš©λ¬Έμ„ μΆ”μΆœν•©λ‹ˆλ‹€. κ°„λ‹¨ν•˜κ²Œ ν•˜κΈ° μœ„ν•΄ {{document}} λ¬Έμ„œμ— λŒ€ν•œ 자리 ν‘œμ‹œμžλ₯Ό μΆ”κ°€ν–ˆμŠ΅λ‹ˆλ‹€. ν”„λ‘¬ν”„νŠΈλ₯Ό ν…ŒμŠ€νŠΈν•˜λ €λ©΄ ν”„λ‘¬ν”„νŠΈ μ—”μ§€λ‹ˆμ–΄λ§ (opens in a new tab)에 λŒ€ν•œ 이 νŽ˜μ΄μ§€μ™€ 같은 Wikipedia λ¬Έμ„œλ₯Ό λ³΅μ‚¬ν•˜μ—¬ 뢙여넣을 수 μžˆμŠ΅λ‹ˆλ‹€. 이 μž‘μ—…μ—λŠ” 더 큰 μ»¨ν…μŠ€νŠΈκ°€ μ‚¬μš©λ˜κΈ° λ•Œλ¬Έμ— OpenAI의 gpt-4-1106-preview λͺ¨λΈμ„ μ‚¬μš©ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€. 이 ν”„λ‘¬ν”„νŠΈλŠ” Claude와 같은 λ‹€λ₯Έ κΈ΄ μ»¨ν…μŠ€νŠΈμ˜ LLMκ³Ό ν•¨κ»˜ μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

Prompt 1:

You are a helpful assistant. Your task is to help answer a question given a document. The first step is to extract quotes relevant to the question from the document, delimited by ####. Please output the list of quotes using <quotes></quotes>. Respond with "No relevant quotes found!" if no relevant quotes were found.
####
{{document}}
####
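For reference, Prompt 1 can be issued roughly as follows. This is an illustrative sketch only, assuming the OpenAI Python SDK (v1+ interface); the task and document go in the system message, and the question is passed via the user role:

```python
# Sketch of issuing Prompt 1 with the OpenAI chat API.
# The system message carries the task instructions and the document;
# the user message carries the question.
PROMPT_1 = (
    "You are a helpful assistant. Your task is to help answer a question "
    "given a document. The first step is to extract quotes relevant to the "
    "question from the document, delimited by ####. Please output the list "
    "of quotes using <quotes></quotes>. Respond with \"No relevant quotes "
    "found!\" if no relevant quotes were found.\n####\n{document}\n####"
)

def build_messages(document: str, question: str) -> list:
    return [
        {"role": "system", "content": PROMPT_1.format(document=document)},
        {"role": "user", "content": question},
    ]

# To actually run it (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4-1106-preview",
#     messages=build_messages(document_text, "What are the prompting techniques?"),
# )
# quotes_output = response.choices[0].message.content
```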

λ‹€μŒμ€ user 역할을 μ‚¬μš©ν•˜μ—¬ μ „λ‹¬λ˜λŠ” μ§ˆλ¬Έμ„ ν¬ν•¨ν•œ 전체 ν”„λ‘¬ν”„νŠΈμ˜ μŠ€ν¬λ¦°μƒ·μž…λ‹ˆλ‹€.

[Screenshot: Prompt Chaining Part 1]

Output of Prompt 1:

<quotes>
- Chain-of-thought (CoT) prompting[27]
- Generated knowledge prompting[37]
- Least-to-most prompting[38]
- Self-consistency decoding[39]
- Complexity-based prompting[41]
- Self-refine[42]
- Tree-of-thought prompting[43]
- Maieutic prompting[45]
- Directional-stimulus prompting[46]
- Textual inversion and embeddings[59]
- Using gradient descent to search for prompts[61][62][63][64]
- Prompt injection[65][66][67]
</quotes>
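As an aside, an output like the one above can be parsed programmatically between the two steps. A hypothetical sketch (the `<quotes></quotes>` tag format and bracketed citation markers follow the output shown above):

```python
import re

def extract_quotes(llm_output: str) -> list:
    """Pull the quote lines out of a <quotes>...</quotes> block."""
    match = re.search(r"<quotes>(.*?)</quotes>", llm_output, re.DOTALL)
    if not match:
        return []
    lines = match.group(1).strip().splitlines()
    # Drop the leading "- " bullet from each non-empty line.
    return [line.lstrip("- ").strip() for line in lines if line.strip()]

def strip_citations(quote: str) -> str:
    """Remove bracketed citation markers such as [27] or [61][62]."""
    return re.sub(r"\[\d+\]", "", quote).strip()
```

Cleaning like this can be done in plain code rather than with another LLM call, which keeps the chain cheaper and more predictable for purely mechanical transformations.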

We can now use the quotes returned from the first prompt as input to the second prompt below. Note that you could clean up the quotes a bit more, e.g., remove the citation markers. Those citations could be stripped out or utilized as part of another prompt in the chain, but you can ignore this for now. The second prompt then takes the relevant quotes extracted by Prompt 1 and prepares a helpful answer to the question given the document and those extracted quotes. The second prompt can be the following:

Prompt 2:

Given a set of relevant quotes (delimited by <quotes></quotes>) extracted from a document and the original document (delimited by ####), please compose an answer to the question. Ensure that the answer is accurate, has a friendly tone, and sounds helpful.
####
{{document}}
####
<quotes>
- Chain-of-thought (CoT) prompting[27]
- Generated knowledge prompting[37]
- Least-to-most prompting[38]
- Self-consistency decoding[39]
- Complexity-based prompting[41]
- Self-refine[42]
- Tree-of-thought prompting[43]
- Maieutic prompting[45]
- Directional-stimulus prompting[46]
- Textual inversion and embeddings[59]
- Using gradient descent to search for prompts[61][62][63][64]
- Prompt injection[65][66][67]
</quotes>

Output of Prompt 2:

The prompting techniques mentioned in the document include:
1. Chain-of-thought (CoT) prompting[27]
2. Generated knowledge prompting[37]
3. Least-to-most prompting[38]
4. Self-consistency decoding[39]
5. Complexity-based prompting[41]
6. Self-refine[42]
7. Tree-of-thought prompting[43]
8. Maieutic prompting[45]
9. Directional-stimulus prompting[46]
10. Textual inversion and embeddings[59]
11. Using gradient descent to search for prompts[61][62][63][64]
12. Prompt injection[65][66][67]
Each of these techniques employs unique strategies to enhance or specify the interactions with large language models to produce the desired outcomes.

As you can see, simplifying a task into a prompt chain is a useful prompting approach whenever responses need to undergo several operations or transformations. As an exercise, feel free to design a prompt that removes the citations (e.g., [27]) from the response before it is sent as a final answer to the user of your application.

λ˜ν•œ 이 λ¬Έμ„œ (opens in a new tab)μ—μ„œ Claude LLM을 ν™œμš©ν•œ ν”„λ‘¬ν”„νŠΈ μ²΄μ΄λ‹μ˜ 더 λ§Žμ€ μ˜ˆμ‹œλ₯Ό 찾을 수 μžˆμŠ΅λ‹ˆλ‹€. 이 μ˜ˆμ œλŠ” μ΄λŸ¬ν•œ μ˜ˆμ œμ—μ„œ μ˜κ°μ„ λ°›μ•„ μˆ˜μ •ν•œ κ²ƒμž…λ‹ˆλ‹€.