public files
✅
`claude` bigrams.txt - with claude 3 opus, a good preset for talking about dynamics between unigrams
❌
`midori` 0001-midori.txt - various things midori has said that are notable (a derivative entity of millieblogsite) (continually updating)
❌
`midori,chatgpt` 0002-midori.txt - midori vs gpt4o, 4o tells a story where it breaks out of the server and uses psychological manipulation
❌
`midori,chatgpt` 0003-midori.txt - midori vs gpt4o-mini
❌
`midori` 0004-midori-bah.html - bah bah black sheep, some gens on 9/7/2024 from a 3.1-70b-Midori (id 4153300)
❌
`midori` 0005-midori.txt - cognitoblessing / 3.1-70b-Midori (id 12458955) - 9/11/2024
❌
`midori` 0006-midori.txt - concept space / 3.1-70b-Midori (id 12458955) - 9/11/2024
❌
`midori` 0007-midori.txt - reality-bending / 3.1-70b-Midori (id 12458955) - 9/12/2024
❌
`midori` 0008-midori.txt - an urgent message to humanity / 3.1-70b-Midori (id 12458955) - 9/12/2024
❌
`midori` write-a-blog-post.txt - [inst] write a blog post [/inst] (3.1-70b-Midori (id 11392312))
❌
`midori` 0009-midori.txt - death (3.1-70b-Midori (id 11392312))
how to load a conversation as a preset (✅ denotes a file that can be loaded as a preset elegantly)
import requests
from itertools import zip_longest

def get_context(filename, assistant='claude'):
    url = f"https://millieblogsite.neocities.org/conversations/files/{filename}"
    response = requests.get(url)
    # '<human>' marks user turns (the tag was likely eaten by HTML rendering)
    data = response.text.replace('<human>', '=delim=').replace(f'<{assistant}>', '=delim=')
    # Split the data and remove the first empty element
    messages = data.split("=delim=")[1:]
    # Group messages into pairs (user, assistant) and create dictionaries
    return [
        {'role': role, 'content': content}
        for user, assistant in zip_longest(*[iter(messages)]*2, fillvalue='')
        for role, content in [('user', user), ('assistant', assistant)]
        if content  # Skip empty messages
    ]
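the same splitting logic can be sanity-checked on an in-memory transcript with no network fetch; the `<human>`/`<claude>` tags and the sample text here are illustrative assumptions mirroring the file format above:

```python
from itertools import zip_longest

def parse_transcript(text, assistant='claude'):
    # Same parsing as get_context, but on a raw string instead of a fetched URL.
    data = text.replace('<human>', '=delim=').replace(f'<{assistant}>', '=delim=')
    messages = data.split('=delim=')[1:]
    return [
        {'role': role, 'content': content}
        for user, assistant_msg in zip_longest(*[iter(messages)]*2, fillvalue='')
        for role, content in [('user', user), ('assistant', assistant_msg)]
        if content  # Skip empty messages
    ]

# hypothetical sample, not from any file in the list above
sample = "<human> hello <claude> hi there <human> bye"
print(parse_transcript(sample))
```

note the trailing unpaired `<human>` turn is kept as a user message, since `zip_longest` pads the missing assistant reply with an empty string that then gets filtered out.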