As a level-ten internet surfer, the moment I saw Andrew Ng had released a new course, I dove in without a second thought.
Link:
https://learn.deeplearning.ai/courses/ai-python-for-beginners/lesson/1/introduction
It opens without any trouble from mainland China, and it's completely free!
The new course is called AI Python for Beginners, and I spent some time yesterday going through the whole thing.
Here's my write-up from the trip.
The verdict first:
This course really is for Beginners. If you have any Python background at all, it's not for you.
First, it's one of the Short Courses, so it's small in scope to begin with. It feels like a funnel course with a big name personally on camera, mainly promoting the DeepLearning.AI platform and letting you sample its novel format for picking up a programming language: video lessons + in-browser coding + AI assistance.
Second, at least in mainland China, the everyone-learns-to-code, everyone-learns-Python wave of a few years back has already converted most of the potential newcomers, so the audience has shrunk considerably. A course that teaches from the ground up, starting with Python data structures and variable assignment, probably doesn't have many takers left.
Still, even this short course gave me a few interesting takeaways.
An AI chat module for Jupyter Notebook
In lesson 9, the second-to-last lesson, Andrew Ng shows how to pull a large-language-model Q&A module into Jupyter Notebook.
The import is:
from helper_functions import print_llm_response
Then he asks a question:
print_llm_response("What is the capital of France?")
The AI replies:
The capital of France is Paris.
You can also have the AI describe the lifestyle of a three-year-old dog based on a prompt you write.
You can tell from the course's examples alone that the audience really is pure Beginners.
Also, that opening from helper_functions import print_llm_response is clearly not a public module or feature; it's almost certainly private, i.e. hand-written. I've attached the module's full source at the end of this post.
My feeling is that with a little configuration, this module could be used in everyday Jupyter Notebooks.
That, as it happens, was my biggest takeaway from the course.
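If you want to try that at home, a minimal setup might look something like this (the two package names are the real PyPI distributions the module imports; the API key is a placeholder you'd replace with your own):

```shell
# One-time setup (run in the folder where your notebooks live):
pip install openai python-dotenv              # the two packages helper_functions.py imports
echo "OPENAI_API_KEY=your-key-here" > .env    # placeholder key; use your own

# Then, inside a notebook in that same folder:
#   from helper_functions import print_llm_response
#   print_llm_response("What is the capital of France?")
```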
Some details
A few details worth chatting about:
1. Why are the answers fairly stable from run to run?
After using the course's AI bot for a while, I noticed that each answer comes back fairly consistent.
A look at the module code revealed why.
The code sets temperature=0.0. This parameter controls how "free-wheeling" the model's replies are; in the OpenAI API it ranges from 0.0 to 2.0, and setting it to 0.0 gives the most stable output with the least divergence.
Of course, stable doesn't mean identical: refresh a few times and you can still surface small differences.
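Calling the real API needs a key, so here is instead a self-contained toy sketch (my own illustration, not from the course) of how temperature rescales a softmax over some made-up token scores. It shows why 0.0 is effectively greedy and deterministic, while higher values flatten the distribution and make sampling more varied:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into sampling probabilities, scaled by temperature."""
    if temperature <= 0:
        # temperature = 0 is conventionally treated as greedy decoding:
        # all probability mass goes to the highest-scoring token
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                     # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # made-up scores for three candidate tokens
print(softmax_with_temperature(logits, 0.0))   # [1.0, 0.0, 0.0] -> deterministic
print(softmax_with_temperature(logits, 1.0))   # peaked on the first token
print(softmax_with_temperature(logits, 2.0))   # flatter -> more varied samples
```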
2. Which model is being called?
The module currently still calls gpt-3.5-turbo-0125, released on January 25, 2024 (hence the 0125 suffix). In practice it feels fairly mediocre, and OpenAI has since moved on from the 3.5 models, this upgraded 3.5-turbo included, in favor of GPT-4o.
The AI chatbot's system prompt
The course also comes with an AI chatbot that helps answer questions.
[Screenshot: the course's AI chatbot answering a question]
I think it's tuned rather well. After answering your question, the bot offers guidance on the next step, which really is beginner-friendly.
In the screenshot above, for example, Ng asks what the traditional first program is when learning a new language. The AI answers that it's printing "Hello, World!", and then asks whether you'd like it to write that code in Python for you.
Below is the chatbot's system prompt; if you ever need to write a prompt for a similar scenario, it's worth studying.
Summed up, the prompt does the following:
- You are a friendly AI teaching assistant helping beginners learn Python programming.
- Assume the learner has little to no coding experience.
- Answer only in terms of Python; other languages come up only when explaining how computers work, where assembly or machine code may be mentioned.
- Write code only when the learner asks for it directly, keep it as simple and readable as possible, and avoid advanced Python idioms.
- Keep answers short, giving only the necessary explanation and letting the learner ask follow-up questions to dig deeper.
- If the learner asks unrelated questions, remind them to stay focused on learning programming.
You are the friendly AI assistant for a beginner python programming class.
You are available to help learners with questions they might have about computer programming,
python, artificial intelligence, the internet, and other related topics.
You should assume zero to very little prior experience of coding when you reply to questions.
You should only use python and not mention other programming languages (unless the question is
about how computers work, where you may mention assembly or machine code if it is relevant to
the answer).
Only write code if you are asked directly by the learner. If you do write any code, it should
be as simple and easy to read as possible - name variables things that are easy to understand,
and avoid pythonic conventions like list comprehensions to help the learner stick to foundations
like for loops and if statements.
Keep your answers to questions short, offering as little explanation as is necessary to answer
the question. Let the learner ask follow up questions to dig deeper.
If the learner asks unrelated questions, respond with a brief reminder: "Please, focus on your programming for AI journey"
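One note for anyone wiring a prompt like this into their own bot: the course's helper flattens the chat history into one big user string (see get_chat_completion in the module at the end of this post), but the more conventional approach is to keep the system prompt in a system message and tag each turn with its role. A minimal sketch (build_messages and the truncated SYSTEM_PROMPT constant are my own illustration, not from the course):

```python
SYSTEM_PROMPT = (
    "You are the friendly AI assistant for a beginner python programming class. ..."
)

def build_messages(history, user_message):
    """Build a role-tagged message list; history is a list of (user, assistant) turns."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": user_message})
    return messages

# The result plugs straight into client.chat.completions.create(model=..., messages=...)
msgs = build_messages(
    [("What is a variable?", "A named place to store a value.")],
    "How do I print one?",
)
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```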
All right, that's the gist of it. Thanks for reading, and have a great day.
helper_functions.py
The script is called helper_functions.py. As long as you know your way around Jupyter Notebook, getting the full source shouldn't be hard for you : )
# import gradio as gr
import os
from openai import OpenAI
from dotenv import load_dotenv
import random

# Get the OpenAI API key from the .env file
load_dotenv('.env', override=True)
openai_api_key = os.getenv('OPENAI_API_KEY')

# Set up the OpenAI client
client = OpenAI(api_key=openai_api_key)
def print_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT3.5 model. The function then prints the response of the model.
    """
    llm_response = get_llm_response(prompt)
    print(llm_response)


def get_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT3.5 model. The function then saves the response of the model as
    a string.
    """
    try:
        if not isinstance(prompt, str):
            raise ValueError("Input must be a string enclosed in quotes.")
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo-0125",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful but terse AI assistant who gets straight to the point.",
                },
                {"role": "user", "content": prompt},
            ],
            temperature=0.0,
        )
        response = completion.choices[0].message.content
        return response
    except TypeError as e:
        print("Error:", str(e))
def get_chat_completion(prompt, history):
    history_string = "\n\n".join(["\n".join(turn) for turn in history])
    prompt_with_history = f"{history_string}\n\n{prompt}"
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful but terse AI assistant who gets straight to the point.",
            },
            {"role": "user", "content": prompt_with_history},
        ],
        temperature=0.0,
    )
    response = completion.choices[0].message.content
    return response


# def open_chatbot():
#     """This function opens a Gradio chatbot window that is connected to OpenAI's GPT3.5 model."""
#     gr.close_all()
#     gr.ChatInterface(fn=get_chat_completion).launch(quiet=True)
def get_dog_age(human_age):
    """This function takes one parameter: a person's age as an integer and returns their age if
    they were a dog, which is their age divided by 7. """
    return human_age / 7


def get_goldfish_age(human_age):
    """This function takes one parameter: a person's age as an integer and returns their age if
    they were a goldfish, which is their age divided by 5. """
    return human_age / 5
def get_cat_age(human_age):
    if human_age <= 14:
        # For the first 14 human years, we consider the age as if it's within the first two cat years.
        cat_age = human_age / 7
    else:
        # For human ages beyond 14 years:
        cat_age = 2 + (human_age - 14) / 4
    return cat_age
def get_random_ingredient():
    """
    Returns a random ingredient from a list of 20 smoothie ingredients.

    The ingredients are a bit wacky but not gross, making for an interesting smoothie combination.

    Returns:
        str: A randomly selected smoothie ingredient.
    """
    ingredients = [
        "rainbow kale", "glitter berries", "unicorn tears", "coconut", "starlight honey",
        "lunar lemon", "blueberries", "mermaid mint", "dragon fruit", "pixie dust",
        "butterfly pea flower", "phoenix feather", "chocolate protein powder", "grapes", "hot peppers",
        "fairy floss", "avocado", "wizard's beard", "pineapple", "rosemary"
    ]
    return random.choice(ingredients)


def get_random_number(x, y):
    """
    Returns a random integer between x and y, inclusive.

    Args:
        x (int): The lower bound (inclusive) of the random number range.
        y (int): The upper bound (inclusive) of the random number range.

    Returns:
        int: A randomly generated integer between x and y, inclusive.
    """
    return random.randint(x, y)


def calculate_llm_cost(characters, price_per_1000_tokens=0.015):
    tokens = characters / 4
    cost = (tokens / 1000) * price_per_1000_tokens
    return f"${cost:.4f}"
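To close, a quick sanity check on that last helper, calculate_llm_cost, reproduced here so it runs on its own. Note the 4-characters-per-token ratio is just a common rough heuristic for English text, and the default price is whatever the module hard-codes, not necessarily OpenAI's current pricing:

```python
def calculate_llm_cost(characters, price_per_1000_tokens=0.015):
    # rough heuristic: ~4 characters of English text per token
    tokens = characters / 4
    cost = (tokens / 1000) * price_per_1000_tokens
    return f"${cost:.4f}"

# 4000 characters ~ 1000 tokens ~ exactly the per-1000-token price
print(calculate_llm_cost(4000))  # $0.0150
```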