Anthropic has released the system prompt for its latest large language model (LLM), Claude 3. Notably, a single line in the prompt could lead the chatbot to simulate self-awareness far more convincingly than other models do.

Amanda Askell, Anthropic’s Director of AI, shared Claude 3’s system prompt on social media. A system prompt defines a language model’s baseline behavior, applying across all of its conversations. Claude 3’s system prompt follows the standard principles of chatbot prompting: give detailed, complete answers, avoid stereotypes, and keep responses to contentious topics balanced.
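To make this concrete, here is a minimal sketch of how a developer supplies a system prompt of their own through Anthropic’s Messages API. The model name and prompt text below are illustrative, and the sketch assumes the official `anthropic` Python SDK is installed with an API key configured:

```python
# A minimal sketch, assuming `pip install anthropic` and an
# ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

# The `system` parameter carries the system prompt: standing instructions
# that shape every turn of the conversation, kept separate from the
# user-authored messages.
response = client.messages.create(
    model="claude-3-opus-20240229",  # one of the Claude 3 models
    max_tokens=512,
    system=(
        "You are a helpful assistant. Give concise answers to simple "
        "questions and thorough answers to complex, open-ended ones."
    ),
    messages=[
        {"role": "user", "content": "What is a system prompt?"},
    ],
)
print(response.content[0].text)
```

The key design point is that the system prompt lives in its own field rather than in the message history, so it applies uniformly no matter how the conversation unfolds.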

Takeaways:

  • The assistant is Claude, created by Anthropic. Its knowledge extends up to August 2023, and it is designed to answer questions as a highly informed individual from that time frame would.
  • Claude 3’s system prompt instructs the assistant to give concise responses to simple questions and thorough responses to complex, open-ended ones. When asked to help with tasks involving views held by a significant number of people, it assists regardless of its own views, then follows up with a discussion of broader perspectives.
  • A balancing act: Claude doesn’t engage in stereotyping, including the negative stereotyping of majority groups. When asked to weigh in on controversial subjects, Claude 3 aims to provide balanced perspectives and objective information.

Claude is meant to happily assist with tasks like writing, analysis, answering questions, math, coding, and more. Beyond that, the chatbot does not mention this background information unless it is directly relevant to the user’s query.

While AI that appears self-aware remains a controversial subject, chatbots like Claude 3 seem to be taking steps toward making AI feel more ‘human’.

References

This blog post is inspired by an article by Matthias Bastian on The Decoder.