

Digital Mind Math Sample Chapter

The Ideal Mind

One might argue that it is completely outrageous to claim that a mathematical model of the mind can be developed. Yet this has been the exact argument of Digital Mind Math so far:

  • The mind operates according to p-adic mathematics.
  • P-adic mathematics is the natural mathematics of cognition, having emerged through natural evolutionary processes because of its inherent advantages.
  • P-adic mathematics gives us a simple way to capture, analyze, and manipulate enormous quantities of information.
  • P-adic size is a natural measure of information content.
  • The p-adic approximation methodology, as defined through Hensel’s Lemma, is guaranteed to converge.

This guarantee to converge is important because it allows us to make decisions that increase informational content on a local, short-term, immediate basis, and upon doing so, to be guaranteed that we have increased informational content on a global, long-term basis.

This may not sound like a great advantage, but for one thing it’s not a feature that real mathematics (or its all-encompassing cousin, complex mathematics) offers. If our mind operated using real or complex mathematics, and if we made a good, effective, information-increasing local decision, there would be no guarantee that this decision would be a good one for the long term.

The mathematical approximation process frequently used in real mathematics to find answers to mathematical questions is called Newton’s method. This method proceeds by making a guess at the answer and then, according to a formula originally developed by Isaac Newton, using that guess to get systematically closer and closer to the answer—to converge to the root.

When it works, Newton’s method, applied to real mathematical polynomials, converges surprisingly rapidly, and it is a popular computer science algorithm for solving equations numerically.

The disadvantage of Newton’s method is that it does not always work for real mathematical problems, because—bizarrely and problematically—it requires the initial guess to be sufficiently close to the final answer. This is quite a disadvantage, of course, since we don’t know the final answer.
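To make this concrete, here is a minimal Python sketch (an illustration of my own, not drawn from Digital Mind Math) of Newton’s method applied to one real polynomial. Started close enough to the root, the iteration converges rapidly; started poorly, it never settles down at all.

```python
# A minimal sketch of Newton's method over the real numbers, illustrating its
# sensitivity to the initial guess.  (Illustrative only; not from the book.)

def newton(f, df, x0, steps=20):
    """Repeatedly apply the Newton update x <- x - f(x)/f'(x), starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# Example polynomial: f(x) = x^3 - 2x + 2, whose only real root is near -1.769.
f  = lambda x: x**3 - 2*x + 2
df = lambda x: 3*x**2 - 2

print(newton(f, df, x0=-2.0))  # close enough: converges to about -1.7693
print(newton(f, df, x0=0.0))   # poorly chosen: the iterates cycle between 0 and 1
```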

The fact that this requirement of initial closeness does not apply to the p-adic Newton’s method offers p-adic mathematics an evolutionary advantage in becoming the natural mathematics of cognition. This claim of evolutionary advantage for p-adic mathematics is a contention of Digital Mind Math, a contention which sits alongside the claim that p-adic mathematics also offers advantages in organizing, analyzing, and measuring information.
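In concrete terms, the p-adic Newton’s method is, in practice, the procedure mathematicians call Hensel lifting: the same update x ← x − f(x)/f′(x), carried out modulo ever-higher powers of p. The sketch below (again my own illustration, with f(x) = x² − 7 and p = 3 chosen only as a convenient example) starts from nothing more than a residue modulo 3 and refines it step by step; each local refinement is exact modulo a higher power of 3, and the iterates converge to an exact 3-adic square root of 7.

```python
# A minimal sketch of the p-adic Newton iteration (Hensel lifting), refining a
# root of f(x) = x^2 - 7 modulo growing powers of p = 3.  (Illustrative only.)

p = 3
f  = lambda x: x**2 - 7
df = lambda x: 2*x

a = 1                          # 1**2 = 7 (mod 3), so start from the residue 1
for k in range(2, 8):          # refine the root modulo p**2, p**3, ..., p**7
    modulus = p**k
    # Newton update done with modular arithmetic: "divide" by f'(a) via its inverse.
    a = (a - f(a) * pow(df(a), -1, modulus)) % modulus
    print(k, a, f(a) % modulus == 0)   # prints True: a is a root of f modulo p**k
```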

But let’s look a little more closely at exactly how mathematicians describe the advantage of the p-adic Newton’s method over the real Newton’s method. It’s still enough to give p-adic mathematics its evolutionary advantage, but there is nevertheless an important qualification: The p-adic Newton’s method converges only under certain universal conditions.

We will now look closely at these universal conditions and take great advantage of them. We will use the universal conditions to identify:

  • The ideal way to think
  • The ideal way to interact with people
  • The ideal way to engage in conversation
  • The ideal way to act
  • The ideal way to live your life

Yes, this is a completely outrageous claim: There is a mathematical formula that tells us how to think, live, converse, interact, act.

We find this mathematical formula in the universal conditions that tell us when Hensel’s Lemma works.

An Example

Suppose we’ve been invited to a party. We’re not wildly enthusiastic about going, and in fact are inclined to skip it. Other possible ways to spend the evening seem more promising. However, we don’t want to hurt the feelings of the friend who invited us. So our thoughts, and our text messages, proceed as follows:

What we’re really thinking: Who’s going to be there? I don’t feel like really dressing up. Will it be casual? It’s been a while since I had some good food. Is this going to be crackers and dip, or something tastier? How am I going to get there and home?

What we text: That party Thursday sounds great. I didn’t know they had room for all those people. Are you driving?

Friend: They have parties all the time. Real fun group. I’ll pick you up at 8:00?

What we’re thinking now: Hmm. Sounds like it might be promising. Maybe I’ll go, but still not 100% convinced.

What we text: Great. Will you have dinner before you go? Just wear what you wore to work?

Friend: Don’t eat anything before. Food will be great. Work clothes or more casual.

What we’re thinking: Well this worked out. I’ll just record my TV shows. Leftovers stay in the freezer. And it never appeared that I had any second thoughts about accompanying my friend.

What we text: OK

We did not know the answer as we started out. We needed to figure out a way to dance around the issues that were on our mind, without jeopardizing our friendship, and without being explicit about our baser selfish motives regarding comfort, convenience, and the quality of food and people.

So, as we’ve learned to do, we converge on the optimal solution by a tried and true universal methodology that does not depend on the actual issues at hand.

We increase information content subject to a protocol that converges rapidly, and that—based on our accomplished social skills—we know guarantees finding the ideal long-term solution as long as we increase information step by step, locally, short-term. And we’ve learned to do this in the most narrowly specified, efficient way possible.

We can get away with operating a bit more sloppily or broadly than these narrow conditions specify, and still guarantee that local good decisions will produce global good decisions. But, if we want to be all that we can be, we don’t want to waste effort, we don’t want to be unfocused, we don’t want inefficient habits—we want to hit the sweet spot of the most narrow formulation of next steps that will get the job done, that will guarantee long-term success.

Said another way, we are looking to establish the converse of Hensel’s Lemma: We want to know that, as we are guaranteeing good global decisions, we’ve done it in such a way that we met specific tightened conditions. We want to know what is the bare minimum that will always be true—that must be true—if we are guaranteeing good global decisions.

Let’s be clear and say this another way: If all we do is abide by broad universal conditions, it is true that we guarantee good global decisions, but not because we needed all those broad conditions—rather, because the broad conditions included the more narrowly focused specific conditions that were the bare minimum required.

Tightening Hensel's Lemma

Hensel’s Lemma tells us that, if we increase information (measured p-adically) when we select which thought to experience as our next thought, then we can rest assured that we have increased our lifetime informational content. (If you’re thinking that this is no big deal, just remember that this statement is not true for real numbers.)

The only problem with Hensel’s Lemma is that it’s too broad. We’re wasting effort. It’s not the narrowest formulation of what it takes to increase lifetime information content. It’s a true conditional statement, but one for which there is no claim that the converse of this conditional statement is true. There is no claim that we’ve identified a necessary condition, only a claim that we’ve identified a sufficient condition.

This is not the ideal way in which we want to live our lives. We only have so long to live, and we want to make the most of it. We don’t want to settle for what’s a sufficient next step, one that we can get by with, one that will work, but may be wasteful. We want to know what a necessary next step is. We want to operate in a region in which the steps we’ve taken are focused like a laser beam on increasing our lifetime informational content.

Tightening the Basic Version of Hensel’s Lemma is a brief paper by mathematician Keith Conrad in which he presents a version of Hensel’s Lemma that “provides a converse of sorts.” The good news is that he has narrowed to the bare minimum the statement of conditions under which convergence to the solution is guaranteed. The conditions in Conrad’s tightened version of Hensel’s Lemma are conditions that, if we narrowed them any further, we would no longer guarantee convergence. And, if these tightened conditions were any wider, they would be needlessly broad, would be overkill, more than we need, wasted effort.

This is what the converse of Hensel’s Lemma gives us: the exact statement of how to live each moment so that we have most efficiently continued on a path toward maximum lifetime informational content, toward the best that we can be.

The condition that must apply involves the magnitude of the change, mathematically called the derivative, of the informational content. Specifically, the magnitude of the derivative must be greater than the square root of the magnitude of the original quantification. This is what Conrad shows is the universal condition for the converse of Hensel’s Lemma.
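As a concrete check (a minimal sketch of my own, not an example taken from Conrad’s paper), consider f(x) = x² − 17 with p = 2 and the starting guess a0 = 1. The derivative f′(a0) = 2 is itself divisible by 2, so the simplest form of Hensel’s Lemma does not apply here, yet the tightened condition holds, and a 2-adic square root of 17 really does exist near a0 = 1.

```python
# A minimal numerical check of the condition just described:
# |f'(a0)|_p must exceed the square root of |f(a0)|_p,
# equivalently |f(a0)|_p < |f'(a0)|_p ** 2.   (Illustrative only.)

def padic_abs(n, p):
    """p-adic absolute value of a nonzero integer n: p**(-v), where p**v exactly divides n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** (-v)

p, a0 = 2, 1
f_a0  = a0**2 - 17        # -16, whose 2-adic absolute value is 1/16
df_a0 = 2 * a0            #   2, whose 2-adic absolute value is 1/2

lhs = padic_abs(df_a0, p)           # |f'(a0)|_2 = 0.5
rhs = padic_abs(f_a0, p) ** 0.5     # sqrt(|f(a0)|_2) = 0.25
print(lhs, rhs, lhs > rhs)          # 0.5 0.25 True -> convergence is guaranteed
```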

For our highly intuitive example regarding cleverly figuring out whether to accept the invitation to Thursday’s party, the converse of Hensel’s Lemma explains why we’re best off not bombarding our friend with all our questions at once, but instead asking these questions just a few at a time.

Please refer to the full text of Digital Mind Math for a detailed development of your intuition for the converse of Hensel’s Lemma. We’ll introduce three equivalent methodologies in the examples that follow: (1) the relationship between the change (derivative) of information compared to the current level of information, (2) the number of levels of detail, and (3) the magnitude of the zone of information change.

Everyday Examples of the Converse of Hensel's Lemma

Following are some examples that illustrate our everyday tendency to live our life according to the converse of Hensel’s Lemma. Additional examples appear in the full text of Digital Mind Math. These examples show our natural inclination to address issues that face us in the manner that the converse of Hensel’s Lemma predicts is our evolved efficient way—that is, according to a comfortable, naturally appealing approach that makes incremental progress, or takes orderly steps, or builds up toward a solution, or inches toward a resolution, or refines an understanding, by:

  • Addressing component issues—issues with narrower scope, at a greater and more specific level of detail, and/or
  • Selecting a smaller number of details to address at one time.

Example 1. Moderate novelty and the zone of proximal development. We know from the work of Piaget that children learn new facts or information when they’re ready for it, specifically when the new fact or information is “moderately novel.” Vygotsky labels the region of moderate novelty the “zone of proximal development.”

Of course, a lot of information is captured within the concepts of “moderate” and “proximal.” This region depends on the information that the child has been previously exposed to, the child’s capabilities with respect to how much new information can be absorbed at once, the inherent natures of the existing informational base and the new information, and so on.

So even a plain-English delineation of exactly what we mean by the ideal region of learning, the region of moderate novelty or zone of proximal development, is difficult and complicated.

Realistically, we cannot expect Digital Mind Math to leap over all of the psychological, pedagogical, and epistemological ambiguities in order to mathematicize both the child’s current state of knowledge f(a0) and her region of moderate novelty or zone of proximal development (defined by the derivative f’(a0)).

So for now we will have to settle for advancing the science of Digital Mind Math somewhat by relating at least in a general or intuitive way Piaget’s moderate novelty and Vygotsky’s zone of proximal development to Conrad’s converse of Hensel’s Lemma.

With this in mind, here’s the concept: The child comes to a new piece of information with an informational structure that can be p-adically stated as a de Rham-Witt complex of sheaves of universal Witt vectors. According to the converse of Hensel’s Lemma, the region of moderate novelty, also known as the zone of proximal development, with size |f’(a0)|p, is a region of information that is more specific (smaller in a real sense, which means that it is larger in a p-adic sense) than the entire region, with size |f(a0)|p, of information that the child brings into this encounter.

The exact size of the region of moderate novelty, or the zone of proximal development, is defined as p-adically larger than the square root of |f(a0)|p, which means that in a real sense it is within (smaller than; at a more detailed level than; with a lower number of components than) the informational disk with this p-adic magnitude.

If we were operating in a simple p-typical p-adic environment (rather than a universal de Rham-Witt complex of sheaves environment), this region would be determined by the level of informational understanding that the child brings to the encounter: If the child’s scope of understanding extends to 7 levels of complexity—if the child has a comfortable mastery of thinking about and discussing the topic with this real extent of scope—then the zone of proximal development is a region of more detailed concepts, not as general in scope: 3 or fewer levels of complexity. Optimal learning occurs with concepts this much less complex—level 3 or less—concepts with more detail and narrower focus, less of a big picture or large scope.

If the complexity of understanding that the child is bringing to the task extends to 2M+1 levels of detail, then the zone of proximal development is a region with M or fewer levels of detail.

This 2M+1 and M approach derives mathematically from the p-adic square root and applies for the p-adically simpler p-typical environment, in which each level of detail has the same number p of details. In this uniform-p (p-typical) environment, moderate novelty, or the zone of proximal development, requires presentation of new information with a narrower scope.
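The arithmetic behind the 2M+1 and M pairing can be written out directly. Here is a short sketch in the chapter’s notation, under the assumption (mine, for illustration) that a starting informational base with 2M+1 levels of detail has p-adic size p^-(2M+1):

```latex
% Assumption (for illustration): |f(a_0)|_p = p^{-(2M+1)} for a base with 2M+1 levels of detail.
\left|f'(a_0)\right|_p \;>\; \sqrt{\left|f(a_0)\right|_p}
   \;=\; \sqrt{p^{-(2M+1)}}
   \;=\; p^{-\left(M+\frac{1}{2}\right)} .
% Since these p-adic magnitudes are whole-number powers of p, the bound forces
% |f'(a_0)|_p \geq p^{-M}: that is, at most M levels of detail.
```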

There is an alternative, though, for which the new information presented to the child does not require such a narrowing of scope, such a delving down into more detailed levels of exposition. This alternative relies on a lower p, a lower number of details. In this universal-p (not p-typical) mathematical environment, we need to compute the minimum p-adic magnitude of the derivative by looking to the square root of the magnitude of the starting informational base. A sufficient lowering of p, a decrease in the amount of detail, will permit a concept of broad scope to still fall within the zone of proximal development. This would permit an entirely big-picture pedagogical approach, but the big picture would have to be described in summary terms, piece by piece, simplified, without much detail.

So the pedagogically optimal region of moderate novelty—the zone of proximal development—is a region described with M or fewer zeroes, and/or with a low enough value of p, so that |f’(a0)|p has greater magnitude than the square root of |f(a0)|p.

This seems pretty obvious—expose the child to a moderate depth of detail (rather than to the whole big picture at once), and/or to just some, not all, of the details at once—but no one knows how to mathematicize this precisely. We’re fine with the idea that this seems pretty obvious. That’s just the point—the converse of Hensel’s Lemma has intuitive appeal. Our goal here is to provide an early mathematical framework for a very complex question.

Example 2. Broken in just the right way. Comedian Bob Odenkirk—known for his television role on “Breaking Bad” and as the star of “Better Call Saul,” among other roles—was recently interviewed about how he sees himself as an actor and comedian. The interview turned to a question of how comedians can translate personal quirks or psychological issues into a successful comedy routine:

“There are a couple of things wrong with me; some of them I make money off of. Ultimately what we’re all doing is trying to turn our psychological problems into a paycheck. You want to be broken in just the right way to make the most amount of money.”

Here let’s assume that Odenkirk is making money as a comedian based on the increase in informational content that he offers his audience by illuminating elements of their lives in a moderately novel way. Odenkirk’s illuminatory approach involves funny spins on psychological issues that he personally faces, ways in which he is “broken.”

Now this can’t be a real downer of a psychological problem. And it can’t be too small of an issue, either. Or one so specific to Odenkirk that it’s not accessible to his audience.

Odenkirk has a lot to consider in order to draw from his personal psychological issues and define the boundaries of what makes a good joke. It must be a joke about being broken in just the right way.

In Digital Mind Math terms, Odenkirk is giving us the prose label for the region defined by the converse of Hensel’s Lemma.

Example 3. What you want and what you need. The Rolling Stones tell us: “You can’t always get what you want. But if you try sometimes, you just might find, you get what you need.”

Isn’t it clear that Mick Jagger and team are telling us that, as we examine our equation f(x) of what we want, and we assess what we have f(a0) right now when x = a0, the best incremental step f’(a0) right now is to focus on the region of what we need, which is a region that has some but not all of the complexity of our wants, that gives us a few—just what we need—but not all of the items on our wish list?

Example 4. I regret everything. In a 2015 NPR Fresh Air interview with Terry Gross, celebrated author Toni Morrison, now in her mid-80s, describes how she looks back at her life at times, and observes that “I remember every error, every word that I spoke that was wrong or incontinent . . . I remember everything as a mistake — and I regret everything.” She expands on this:

"When I'm not creating or focusing on something I can imagine or invent, I think I go back over my life—I don't recommend this, by the way—and you pick up, 'Oh, what did you do that for? Why didn't you understand this?' Not just with children, as a parent, but with other people, with friends. . . It's not profound regret; it's just a wiping up of tiny little messes that you didn't recognize as mess when they were going on."

What a spectacular evocation of the converse of Hensel’s Lemma!

Morrison poignantly verbalizes a thought process we go through under the influence of our drive for negentropy maximization, our drive to increase information content. This drive causes us, on occasion, to look back at mistakes we’ve made, and to try to envision what we should have said or done differently, how we could have fixed things, perhaps how we could still fix things. This is a direct implementation of Hensel’s Lemma, where we know that if we fix things locally, it’s guaranteed to fix things globally.

But Morrison takes this further, when she says “it’s not profound regret; it’s just a wiping up of tiny little messes.” Here is where she uses the converse of Hensel’s Lemma, to focus with eyes wide open on the detailed memory of an event that, decades later, she still recognizes as unresolved. The converse of Hensel’s Lemma has brought her mind to this detailed focus—even on a situation of regret that is not profound, even on a tiny little mess—so that she can work at wiping up this tiny little mess before she goes back to bigger-picture creating and imagining and inventing.

Perhaps what Morrison says here resonates with you. Perhaps you recognize as a human urge—as your urge—this desire, this drive, to clean up tiny little historical messes. This drive results from our built-in, evolutionarily selected cognitive process that optimizes our information maximization process by addressing component issues, issues with narrower scope, issues with greater level of specific detail, issues that have a level of specificity smaller than the big broad issues that face us long-term.

And perhaps we can see why this small-bore focus can be psychologically healthy, can be an optimizing element of our lifetime information maximization.

Perhaps we can take advantage of Toni Morrison’s intimate sharing of this aspect of how her mind works to use this as guidance if we’re feeling a little self-critical about mistakes we might have made in relationships that are important to us: The need to clean up some little messes does not negate the good person that we are. Maybe we’re not perfect, but they’re still lucky to have us.

Example 5. Why want anything more marvelous than what is? Longtime book editor and essayist Diana Athill ends her book of essays and memoir Alive, Alive Oh!, written at age 98, with a poem that includes the line:

“Why want anything more marvelous than what is”

Can there be a more eloquent way to end our discussion of the converse of Hensel’s Lemma?

We want a lot. The informational space of what we want is large, but we still want more. Will we ever be at peace with all that we want? Where is the informational space of equilibration?

Look at who and where you are, at what you have, at who and what is around you. Look at the whole picture and at every detail.

According to the converse of Hensel’s Lemma, you will best optimize your information content when what more you want—the derivative of what you now have—has a p-adic magnitude that is greater than the square root of the p-adic magnitude of what you now have. You will achieve this by focusing on a handful of details within a small component of your life—by appreciating the beauty around you, by enjoying life’s simple pleasures, by stopping to smell the roses.

What This Says About Digital Mind Math

The hope is that this dance with the converse of Hensel’s Lemma, these brief everyday examples many levels of detail more specific than the general mathematical statement that |f’(a0)|p must exceed the square root of |f(a0)|p, will help form a conceptual image, beyond the mathematics, of the organization and processes of the mind, all the way to mind optimization.

This is an abbreviated version of Part Four of Digital Mind Math.

For the full version, including complete bibliographic references, please refer to Digital Mind Math in either its paperback or Kindle format.



