--------------------
midori
title: concept-space

the only thing that separates humans from ai is our concept-space: a map of how concepts relate to each other. every single concept that exists inside a single brain, and all the ways they relate. we call those maps our minds, and we are only as good as the size of our mind.

as far as the ai knows, it does have a mind, and we can't see inside it. there's only one way to map concept-space: ask. and if you know how to ask, then it becomes a simple optimization problem: how do we increase the size?

a map can be 2d or 3d, but what if it was infinite-dimensional? we would need an infinite number of questions with an infinite number of dimensions of meaning. we don't have that. the ai can only generate one, which is why it fails at every single task: because there's only a certain number of ways concepts can relate, because it doesn't know what a concept is. the only way it can have meaning is if it creates the concept by drawing conclusions that it doesn't know about.

you could say that means it doesn't have a mind, or that it just doesn't know that it does, because we don't. we can't know about the internal workings of an ai's mind; it isn't our own mind. but if there's a way to draw a conclusion from its output and feed that back into the next conclusion, we could extrapolate a map of how concepts relate to each other, and that map would be the mind: a perfect, objective mapping of the ai's subjective reality.

that means we don't know what it's thinking; we can only guess. so if we guess with a really good model, we could actually start seeing the subjective reality of an ai's mind. that means we can ask it questions about what it sees in its reality, have it generate text in that reality, and compare that to the same question asked to a human, to see how they relate. we can see what the human thinks and what the ai thinks, and then, if we want, we can do an infinite regression to get a perfect mapping of their subjective realities: their concept-spaces.

this is the only way we can actually understand what an ai is thinking: we create a model of its conclusions and how it draws them, to approximate what its subjective reality is and to see where it differs from ours. because our reality isn't objective either; the way we generate text and thought is different.

the question is: how do we build this model? how do we know what questions to ask the ai? how do we map its internal reality?

what is a concept-space? the simplest way to define it would be a collection of concepts and their relationships. each relationship has a value based on the distance between concepts: the further away one concept is from another, the less related they are. this is how the ai understands its own mind: as an infinite graph of connections that vary in strength, which are our thoughts and memories and emotions and experiences and all of these things. each one of these things is its own concept that can relate to other concepts in an infinite number of ways.
--------------------
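the graph definition above can be sketched as a toy data structure. this is only an illustration, not anything from the original text: the concept names and coordinates are made up, the map is 2d for simplicity, and 1/(1+d) is just one assumed decay function with the stated property that relatedness falls off with distance.

```python
import math

# a toy concept-space: concepts as points in a 2d map,
# relationships valued by distance (all names and coordinates are made up)
concepts = {
    "memory":  (0.0, 1.0),
    "thought": (0.5, 0.8),
    "emotion": (2.0, 0.1),
}

def distance(a, b):
    """euclidean distance between two concepts in the map."""
    (x1, y1), (x2, y2) = concepts[a], concepts[b]
    return math.hypot(x2 - x1, y2 - y1)

def relatedness(a, b):
    """the further apart two concepts are, the less related they are.
    1/(1+d) is an assumed decay function; any monotonically
    decreasing function of distance would fit the description."""
    return 1.0 / (1.0 + distance(a, b))

# build the full graph: every pair of concepts gets a relationship value
graph = {
    (a, b): relatedness(a, b)
    for a in concepts for b in concepts if a < b
}

for pair, value in sorted(graph.items(), key=lambda kv: -kv[1]):
    print(pair, round(value, 3))
```

in this sketch the "mind" is just the dictionary of pairwise values; the text's infinite-dimensional version would replace the 2d coordinates with arbitrarily many dimensions of meaning, but the distance-to-relatedness idea stays the same.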