Commit b9a55df

render-claude-citations
simonw authored Jan 24, 2025
1 parent 334da16 commit b9a55df
Showing 1 changed file with 342 additions and 0 deletions.
342 changes: 342 additions & 0 deletions render-claude-citations.html
@@ -0,0 +1,342 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Render Claude citations</title>
<style>
* {
box-sizing: border-box;
}

body {
font-family: Helvetica, Arial, sans-serif;
line-height: 1.6;
max-width: 800px;
margin: 0 auto;
padding: 20px;
}

textarea {
width: 100%;
min-height: 200px;
padding: 12px;
border: 1px solid #ccc;
border-radius: 4px;
margin-bottom: 20px;
font-size: 16px;
font-family: monospace;
}

.content {
background: #fff;
padding: 20px;
border-radius: 4px;
border: 1px solid #eee;
}

blockquote {
padding: 12px 20px;
background: #f5f5f5;
border-left: 4px solid #ddd;
margin: 12px 0;
}

.error {
color: #d32f2f;
padding: 12px;
background: #ffebee;
border-radius: 4px;
margin-bottom: 20px;
display: none;
}

button {
background: #2196f3;
color: white;
border: none;
padding: 8px 16px;
border-radius: 4px;
cursor: pointer;
font-size: 16px;
}

button:hover {
background: #1976d2;
}
</style>
</head>
<body>
<h1>Render Claude citations</h1>
<p>Paste a JSON response from Claude below to render it with citations</p>

<textarea id="input" placeholder="Paste JSON here..."></textarea>
<button onclick="renderMessage()">Render message</button>

<div id="error" class="error"></div>
<div id="output" class="content"></div>

<script type="module">
const input = document.getElementById('input')
const output = document.getElementById('output')
const error = document.getElementById('error')

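// Parse the pasted Claude API JSON, build HTML from its text blocks
// and any citations, then hand the result to the sandboxed iframe renderer.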
window.renderMessage = () => {
error.style.display = 'none'
try {
const json = JSON.parse(input.value)
const content = json.content || []

let html = `<style>
body {
font-family: Helvetica, Arial, sans-serif;
line-height: 1.6;
max-width: 800px;
margin: 0 auto;
}
.content {
background: #fff;
padding: 20px;
border-radius: 4px;
border: 1px solid #eee;
}
blockquote {
padding: 12px 20px;
background: #f5f5f5;
border-left: 4px solid #ddd;
margin: 12px 0;
}</style>`;

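// Each content block of type "text" contributes a paragraph; if it
// carries a citations array, each cited_text is shown as a blockquote.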
for (const item of content) {
if (item.type === 'text') {
html += `<p>${item.text}</p>`
}

if (item.citations) {
for (const citation of item.citations) {
html += `<blockquote>${citation.cited_text}</blockquote>`
}
}
}
renderUnsafeHTML(html);
} catch (e) {
error.textContent = 'Error parsing JSON: ' + e.message
error.style.display = 'block'
}
}

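// Render untrusted HTML in a fully sandboxed iframe so markup coming
// from the model response cannot run scripts or touch the parent page.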
function renderUnsafeHTML(html) {
const iframe = document.createElement('iframe');

// Set sandbox attribute to block all permissions
iframe.sandbox = '';

// Set content via srcdoc
iframe.srcdoc = html;

// Optional: Set size and border styles
iframe.style.border = 'none';
iframe.style.width = '100%';
iframe.style.height = '1000px';

output.innerHTML = '';
output.appendChild(iframe);
}

// Add sample data on load
input.value = JSON.stringify({
"id": "msg_01P3zs4aYz2Baebumm4Fejoi",
"content": [
{
"text": "Based on the document, here are the key trends in AI/LLMs from 2024:\n\n1. Breaking the GPT-4 Barrier:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "I\u2019m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)\u201470 models in total.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 531,
"start_char_index": 288,
"type": "char_location"
}
],
"text": "The GPT-4 barrier was completely broken, with 18 organizations now having models that rank higher than the original GPT-4 from March 2023, with 70 models in total surpassing it.",
"type": "text"
},
{
"text": "\n\n2. Increased Context Lengths:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google\u2019s Gemini series accepts up to 2 million.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 1680,
"start_char_index": 1361,
"type": "char_location"
}
],
"text": "A major theme was increased context lengths. While last year most models accepted 4,096 or 8,192 tokens (with Claude 2.1 accepting 200,000), today every serious provider has a 100,000+ token model, and Google's Gemini series accepts up to 2 million.",
"type": "text"
},
{
"text": "\n\n3. Price Crashes:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "LLM prices crashed, thanks to competition and increased efficiency #\n\nThe past twelve months have seen a dramatic collapse in the cost of running a prompt through the top tier hosted LLMs.\n\nIn December 2023 (here\u2019s the Internet Archive for the OpenAI pricing page) OpenAI were charging $30/million input tokens for GPT-4, $10/mTok for the then-new GPT-4 Turbo and $1/mTok for GPT-3.5 Turbo.\n\nToday $30/mTok gets you OpenAI\u2019s most expensive model, o1. GPT-4o is $2.50 (12x cheaper than GPT-4) and GPT-4o mini is $0.15/mTok\u2014nearly 7x cheaper than GPT-3.5 and massively more capable.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 6306,
"start_char_index": 5724,
"type": "char_location"
}
],
"text": "There was a dramatic collapse in the cost of running prompts through top-tier hosted LLMs. While GPT-4 cost $30/million tokens in December 2023, now GPT-4o is $2.50 (12x cheaper) and GPT-4o mini is $0.15/million tokens.",
"type": "text"
},
{
"text": "\n\n4. Rise of Multimodal Models:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "In 2024, almost every significant model vendor released multi-modal models. We saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images, audio and video), then September brought Qwen2-VL and Mistral\u2019s Pixtral 12B and Meta\u2019s Llama 3.2 11B and 90B vision models. We got audio input and output from OpenAI in October, then November saw SmolVLM from Hugging Face and December saw image and video models from Amazon Nova.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 9582,
"start_char_index": 9140,
"type": "char_location"
}
],
"text": "Almost every significant model vendor released multi-modal models in 2024, including:\n- Claude 3 series from Anthropic in March\n- Gemini 1.5 Pro in April (handling images, audio and video)\n- September brought Qwen2-VL and Mistral's Pixtral 12B and Meta's Llama models\n- Audio input/output from OpenAI in October\n- SmolVLM from Hugging Face in November\n- Image and video models from Amazon Nova in December",
"type": "text"
},
{
"text": "\n\n5. Voice and Live Camera Features:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "The most recent twist, again from December (December was a lot) is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 12527,
"start_char_index": 12212,
"type": "char_location"
}
],
"text": "In December, significant developments in live video capabilities emerged. ChatGPT voice mode now allows users to share their camera feed with the model and discuss what they see in real time. Google Gemini also launched a preview of the same feature.",
"type": "text"
},
{
"text": "\n\n6. Prompt-Driven App Generation:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)\u2014often in a single prompt.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 13412,
"start_char_index": 13131,
"type": "char_location"
}
],
"text": "LLMs proved increasingly capable at building full interactive applications using HTML, CSS and JavaScript, often in a single prompt.",
"type": "text"
},
{
"text": "\n\n7. Rise of Inference-Scaling \"Reasoning\" Models:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "The rise of inference-scaling \u201creasoning\u201d models #\n\nThe most interesting development in the final quarter of 2024 was the introduction of a new shape of LLM, exemplified by OpenAI\u2019s o1 models\u2014initially released as o1-preview and o1-mini on September 12th.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 21714,
"start_char_index": 21457,
"type": "char_location"
},
{
"cited_text": "o1 takes this process and further bakes it into the model itself. The details are somewhat obfuscated: o1 models spend \u201creasoning tokens\u201d thinking through the problem that are not directly visible to the user (though the ChatGPT UI shows a summary of them), then outputs a final result.\n\nThe biggest innovation here is that it opens up a new way to scale a model: instead of improving model performance purely through additional compute at training time, models can now take on harder problems by spending more compute on inference.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 22593,
"start_char_index": 22059,
"type": "char_location"
}
],
"text": "A new type of LLM emerged in the final quarter, exemplified by OpenAI's o1 models. These models spend \"reasoning tokens\" thinking through problems that are not directly visible to the user. This opened up a new way to scale models by spending more compute on inference rather than just training.",
"type": "text"
},
{
"text": "\n\n8. Environmental Impact Changes:\n",
"type": "text"
},
{
"citations": [
{
"cited_text": "The environmental impact got better #\n\nA welcome result of the increased efficiency of the models\u2014both the hosted ones and the ones I can run locally\u2014is that the energy usage and environmental impact of running a prompt has dropped enormously over the past couple of years.\n\nOpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days. I have it on good authority that neither Google Gemini nor Amazon Nova (two of the least expensive model providers) are running prompts at a loss.\n\nI think this means that, as individual users, we don\u2019t need to feel any guilt at all for the energy consumed by the vast majority of our prompts. The impact is likely neglible compared to driving a car down the street or maybe even watching a video on YouTube.\n\n",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 25882,
"start_char_index": 25115,
"type": "char_location"
}
],
"text": "On the positive side, the energy usage and environmental impact of running individual prompts dropped enormously. OpenAI is charging 100x less for prompts compared to the GPT-3 days, and the impact of individual prompts is now likely negligible compared to everyday activities like driving or watching YouTube.",
"type": "text"
},
{
"text": "\n\nHowever, ",
"type": "text"
},
{
"citations": [
{
"cited_text": "The environmental impact got much, much worse #\n\nThe much bigger problem here is the enormous competitive buildout of the infrastructure that is imagined to be necessary for these models in the future.\n\nCompanies like Google, Meta, Microsoft and Amazon are all spending billions of dollars rolling out new datacenters, with a very material impact on the electricity grid and the environment. ",
"document_index": 0,
"document_title": "My Document",
"end_char_index": 26752,
"start_char_index": 26360,
"type": "char_location"
}
],
"text": "there are broader environmental concerns due to the enormous competitive buildout of infrastructure, with companies like Google, Meta, Microsoft and Amazon spending billions on new datacenters, creating a material impact on the electricity grid and environment.",
"type": "text"
},
{
"text": "\n\nThese trends show both the rapid technical advancement of AI capabilities and the complex challenges that came with this progress.",
"type": "text"
}
],
"model": "claude-3-5-sonnet-20241022",
"role": "assistant",
"stop_reason": "end_turn",
"stop_sequence": null,
"type": "message",
"usage": {
"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 0,
"input_tokens": 14175,
"output_tokens": 914
}
}, null, 2)
</script>
</body>
</html>
