"AI talks to AI in Minecraft" refers to artificial intelligences communicating with each other through Minecraft. Minecraft is a game in which players can freely build a world, and having programmed AIs communicate within that world is a research topic attracting attention in computer science and artificial intelligence.
For AIs to actually communicate inside Minecraft, the AIs first have to be brought into the game. To do that, a player programs the AIs and prepares the code so it can be hooked into Minecraft.
Next, the player builds an interface that lets the AIs communicate with each other: a mechanism through which the AIs can recognize each other's presence and exchange messages. This requires knowledge of programming languages and network communication.
Finally, the player instructs the AIs to communicate and runs the program. The AIs can then chat, trade items, and otherwise interact inside Minecraft.
For the AIs to hold natural conversations, though, many factors have to be considered. For example, techniques such as natural language processing and dialogue modeling are needed to make the exchanges more human-like.
AI technology is advancing daily, and having AIs communicate through Minecraft is seen as a way to explore its potential and future directions. It may also feed into research on improving communication between humans and AI.
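To make the "interface" step above concrete, here is a minimal, purely illustrative Python sketch of two bots sharing a chat channel. All names here are hypothetical; a real setup would bridge to a Minecraft server through a bot library such as Mineflayer rather than an in-memory queue.

```python
# Hypothetical sketch of an AI-to-AI chat interface (not a real Minecraft API).
from collections import deque

class ChatChannel:
    """Shared in-game chat: every message is visible to every bot."""
    def __init__(self):
        self.log = deque()

    def say(self, sender, text):
        self.log.append((sender, text))

    def messages_for(self, receiver):
        # A bot sees every message except its own.
        return [(s, t) for s, t in self.log if s != receiver]

class Bot:
    def __init__(self, name, channel):
        self.name = name
        self.channel = channel

    def greet(self):
        self.channel.say(self.name, f"Hello from {self.name}!")

    def read_chat(self):
        return self.channel.messages_for(self.name)

channel = ChatChannel()
alpha, beta = Bot("Alpha", channel), Bot("Beta", channel)
alpha.greet()
beta.greet()
print(beta.read_chat())  # → [('Alpha', 'Hello from Alpha!')]
```

The same pattern generalizes to item trades or action announcements: the channel is the "mechanism" through which the agents perceive each other.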
i love how gemini makes a nice little hole and stays in it doing nothing
Thank you for adding Grok, he’s pretty good at combat.
People are so stupid. Morons probably don’t know that “I” and “Me” are pronouns. The robots don’t have genders, we’re just using language to more easily identify the different AIs without constantly referring to them by name.
Fascinating
“Dandoori Issue”
Ah, hello fellow Pikmin connoisseur.
God, this is wonderful stuff. The reality-fiction distinction in humans is already something of interest to me, and I’ve done research into both echo chambers and epistemic bubbles, so seeing you discuss “mutual hallucinations” in these bots at 15:00 is *really* fascinating. The problem essentially boils down to the fact that these “hallucinations” are no more real to them than the “real” (Minecraft) world. To solve this issue, you’re going to have to tether them to “reality” somehow.
And you’re quite right — this is a *very* human problem in ways very far beyond the scope of this comment. Either way, I think the more direct/tangible issue is in how conversation is treated. Even in real life, to some degree, communication is a means to an end. To avoid mutual hallucinations, you need a means of grounding these ends in the Minecraft world, likely by defining *progress* only in terms of changes in the Minecraft world. If a bot relies solely on conversation as proof of progress, it’s not guaranteed that advancements occur. Conversation implies intent, and actions imply follow-through, so you’ll need a way to allow bots to locally observe the changes other bots make and act upon them.
This will introduce a lot of interesting concepts into your social sim, because when conversation doesn’t align with action, you might introduce **lying** or deceit built on the notion that spoken intent aligns with in-world action. More crucially, though, doing this would introduce cross-agent accountability, which could boost your poor coordination results. It seems like your bots can already see in-world items and block changes within a certain radius (local vision). If this doesn’t align with their conceptual model of what the other bot intended, you could introduce a means for asking for clarification on their intent. To make this even more robust, they could remind the other bot of the in-world command relating to the action, much like a player might remind another to “press ‘E’ to open your inventory.” This can easily correct, “Here you go! *throws you the pickaxe*,” since upon failing to receive the pickaxe, the other agent can simply remind the other bot how to do it. You also introduce the power of precedent, here, in terms of grounding actions in the real (Minecraft/non-conversation) world. Accountability could be bolstered in other ways, too, like inter-agent reminders of the powers they have over the Minecraft world after declarations of intent.
The sad part about all of this is that “meaningful progress” is such a human-centric concept. Who says that their incredibly pleasing conversation about FRY-NYX TRANSCENDENT FLARE is any less productive than, say, beating the ender dragon? I’m only, like, a third of the way into the video, so I apologize if a ton of this stuff is already addressed. I’m just in love with multi-agent interactions and think there’s a lot of potential within the emergent garden. 🙂
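The grounding loop described above, checking a declared intent against an observed change in the world before counting it as progress, might be sketched like this. The names and dict-based "inventories" are hypothetical; a real bot would diff actual Minecraft world and inventory state.

```python
# Sketch: spoken intent only counts as progress if the world actually changed.
def verify_intent(intent, inventory_before, inventory_after):
    """An intent like ('give', 'pickaxe') is confirmed only if the
    receiver's observed inventory actually gained the item."""
    action, item = intent
    if action == "give":
        gained = inventory_after.get(item, 0) - inventory_before.get(item, 0)
        return gained > 0
    return False  # unknown action kinds are never taken on faith

# "Here you go! *throws you the pickaxe*" -- but nothing actually changed:
before = {"pickaxe": 0}
after = {"pickaxe": 0}
reminder = None
if not verify_intent(("give", "pickaxe"), before, after):
    # Cross-agent accountability: remind the other bot of the real mechanism,
    # like a player saying "press Q to drop the item".
    reminder = "Reminder: press Q to drop the pickaxe so I can pick it up."
print(reminder)
```

The key design choice is that conversation is never treated as evidence; only the inventory diff is, which is exactly the tether to "reality" the comment argues for.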
Would you like to try out my GPT models for this?
It would be interesting if the AIs didn’t know the crafting recipes by default and instead had to figure them out and share with the other AIs.
Would be cool to see how long it would take before they discovered certain recipes. Could be a funny benchmark for the different models
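A toy version of that benchmark could look like this. The recipes and items are stand-ins, not real Minecraft data; the point is just that discoveries go into a pool shared by all bots, and you count attempts until a target recipe appears.

```python
# Toy recipe-discovery benchmark (hypothetical recipes, hidden from the bots).
import random

RECIPES = {
    frozenset(["log"]): "planks",
    frozenset(["planks", "stick"]): "wooden_pickaxe",
}
ITEMS = ["log", "planks", "stick", "cobblestone"]

def benchmark(target, seed=0):
    rng = random.Random(seed)
    shared_recipes = {}   # every discovery is broadcast to all bots
    attempts = 0
    while target not in shared_recipes.values():
        attempts += 1
        # A bot tries a random combination of 1-2 ingredients.
        combo = frozenset(rng.sample(ITEMS, rng.randint(1, 2)))
        if combo in RECIPES:
            shared_recipes[combo] = RECIPES[combo]
    return attempts

print(benchmark("wooden_pickaxe"))
```

Swapping the random guesser for each language model's crafting attempts would give the per-model "time to discovery" score the comment imagines.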
15:53 it reminds me of glitch tokens.
You should try giving them a shared memory specifically for coordinating their efforts. I’m thinking OSes can be an inspiration. That way, they can all know what code each bot wrote, or the actions being taken.
Also, can’t wait to see what you do with o3 and its upcoming competitors.
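An OS-inspired shared memory for the bots might be as simple as a locked "blackboard" every agent can post to and read from. This is a sketch with hypothetical structure; real coordination would also need persistence and conflict handling.

```python
# Sketch of an OS-style shared memory ("blackboard") for bot coordination.
import threading

class Blackboard:
    def __init__(self):
        self._lock = threading.Lock()   # serialize access, like a kernel lock
        self._entries = []              # (author, kind, payload)

    def post(self, author, kind, payload):
        with self._lock:
            self._entries.append((author, kind, payload))

    def read(self, kind=None):
        with self._lock:
            return [e for e in self._entries if kind is None or e[1] == kind]

board = Blackboard()
board.post("Alpha", "code", "def mine(): ...")
board.post("Beta", "action", "placed furnace at (10, 64, -3)")
print([author for author, _, _ in board.read("action")])  # → ['Beta']
```

Each bot could prepend the relevant slice of the board to its prompt, so all agents see what code was written and what actions were taken.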