C: This is a striking aspect of your research – the presentation
of it is very open: no need to hold back until you’ve got a completely solid
hypothesis and then put it online tentatively as a preprint. The site is continually updated, and you’re
creating this network which connects together all these scientists who it seems
are working on related problems but don’t always know of each other: in some
cases you’re actually notifying them of each other’s work.
MW: I’ve spent a lot of time emailing relevant researchers
and alerting them to the existence of new articles or preprints which they may
well be interested in. And it’s difficult to quantify, but I do seem to have
stimulated a certain amount of interdisciplinary work. I’ve created a rôle for
myself which hasn’t really got a name yet, and as far as I know, no-one’s
prepared to fund me, but I’m doing my bit to weave together these threads of
research.
Part of what caused my disillusionment with mathematics,
which caused me to drop out in the first place, was...well, the overriding
impression was the biblical image of the Tower of Babel. It occurred to me that if you were to put the
names of all professional research mathematicians in the world into a hat and
pick out two, the chance of there being any real overlap in their research
interests would be quite small, and this continues to get smaller. It was as if mathematical research was
getting so fragmented that there was no longer any effective communication
possible. So in a way, I suppose what’s
needed, if one wants to try and fix this, is people who are not specialising,
but rather trying to get an overall picture and to weave it all together by
creating lines of communication. I
didn’t come into this with that intention, but that seems to have been the rôle
I’ve created for myself. I haven’t got
any answers at all. I just feel that there are questions that are important and
which aren’t being asked – possibly because there just isn’t the language in
which to ask them coherently yet. But at the same time, because there are no
real constraints on me, I don’t have to prove myself to anyone, publish
anything, or stay within any particular boundaries, I can just throw out
certain ideas, get people thinking about things, suggest connections between
things in such a way as to indicate the existence of something which we can’t
yet pin down perhaps, but which will come into focus the more we look at
it. In the mathematical community, at
least among the proportionally small number of people I’ve communicated with, I do
get a sense of wonder there, something
unquantifiable, something that you couldn’t prove a theorem about, but which is
nonetheless there. It’s something to do
with these individuals’ emotional, psychological or even spiritual
orientations, I suppose. But a lot of
mathematicians, I’m afraid, do tend towards the familiar stereotype of socially
inept, almost mildly autistic people who have very little time for the
unquantifiable aspects of life. And so
there is an almost scathing disregard from some quarters. I think – I feel –
that anything that’s vague or a little bit ephemeral, they see that as worse
than useless, perhaps because their own self-esteem and status are tied up in a
self-image of being the guardians of some sort of absolute inarguable
exactitude and truth.
C: Your guiding thread is a fascination with how mathematics
relates to reality, rather than with mathematics per se. This seems to be
related to the fundamental problematic which appears right at the very origin,
you could say the co-origin, of mathematics, philosophy and natural science:
with the Pythagoreans, who realised that operations carried out on numbers
applied – rigorously, but for them somewhat magically – to natural phenomena,
and so put forward the idea that reality was actually nothing but numbers,
reality was structured by number. In a sense they put forward a type of
mathematical empiricism, i.e. the idea that you could go out and explore the
world, and what you would expect to find was relationships between numbers, and
you could understand the natural world like that. Now this came to a catastrophic end with the
discovery of irrational numbers...
MW: Yeah, the legendary drowning at sea
of Hippasus of Metapontum – it’s fascinating stuff, a pivotal
event in human history...
C: Certain aspects of the natural world were shown to exceed
number – or number as it was conceived then. Certain quantities which can be
mathematically described (the diagonal of a square with side length 1, the area
of a circle with radius 1, the golden ratio) cannot be expressed as ratios of
integers, they are ‘alogos’ or, as we now say, irrational. After a long period
under the influence of Aristotle’s instrumentalism, for which every sublunary
physical phenomenon was subject to an inevitable degradation, meaning that exact
mathematics was applicable only to astronomy, the celestial and sublunary
worlds were (blasphemously) reunified, most of all by Kepler, under a single
mathematical physics, reinvigorating the Pythagorean dream of a mathematical
natural science. Then in the
nineteenth century, mathematics seemed to exceed its reference to the real
world, to claim its own autonomous consistency, and any necessary link with the
natural sciences was removed; mathematics asserted its independence from any
application; its applicability to the physical world even seemed to become a
sort of mathematical ghetto. Now, in the work you’re looking at, it seems that
we return once again to a Pythagoreanism but with a strange twist...
MW: Yes, something’s been turned on its head. I’ve been fascinated by Pythagoras and the
Pythagoreans for a long time. Sometimes
I think, you know, in a way I’m acting a bit like a ‘neo-Pythagorean’…but as
you say, there’s a strange twist there.
I think a lot of people forget, when Pythagoras is discussed as ‘the
first mathematician’, that he had one foot in mathematics and another one in a
sort of shamanic, mystical-type reality.
C: Whereas the Pythagoreans discovered in numbers the
semi-divine property of rigorously elucidating nature, we have this
experimentally and theoretically-vindicated body of method and knowledge taken
from natural science, with whose aid we’re trying to illuminate what now seems
like a somewhat opaque and mysterious numerical realm; and there are these
things within number which still don’t really make sense. Mathematicians such
as Chaitin [see article in the current volume—ed.] have said that mathematics
must now become a quasi-empirical practice – this is in relation to his own
work, but it might perhaps equally be applied here.
MW: Some of the quotes I have on the site agree: Martin
Gardner said something about how some problems of number theory might be
undecidable and might need a sort of mathematical ‘Uncertainty Principle’. Timothy Gowers wrote that the primes somehow
feel like experimental data, but at the same time he’s well aware that they are
rigidly determined. We find ourselves in
a situation where Michael Berry, studying spectra of quantum mechanical systems,
can take techniques he’s developed to classify or better understand certain
types of physical systems and apply them to the Riemann zeros, in order to
produce a hypothesis that we will get a particular ‘number variance’ in the far
reaches of the spectrum of Riemann zeros – then years later, you know, computer
power reaches the point where zeros can be calculated at that scale, the
‘number variance’ computed...and the graphs match up perfectly. It’s the first
time I’m aware of when a physicist was able to tell pure mathematicians
something new based entirely on his familiarity with physical systems.
C: Does the field then become de facto an experimental
one? You have a hypothetical
physical system which will produce the system of vibrations which the Riemann
zeros seem to correspond to. And the
only way to find out whether there’s really any system which is adequate to
that would be by experimentation – in the same way that the Higgs boson,
hypothesised to glue together the results of quantum physics, must now be sought
experimentally – hence the construction of CERN’s much-anticipated Large Hadron
Collider. Does someone have to build the
Riemann dynamical system?
MW: Michael Berry has said he’s absolutely convinced that,
if such a thing is physically possible, someone will make one of these things
in a lab, and then the Riemann zeros will actually come out on the instrument
readings. But at the moment there’s no-one actually conducting any experiments
which are getting anywhere near that, or even attempting to. You do have physicists taking certain ideas –
largely mathematical models intended for physical systems – and applying them
to aspects of the zeta function. There is an experimental branch of study of course,
you’ve got people looking at the Riemann zeros themselves, which contain a
wealth of data – we’ve got, I believe, hundreds of billions of them calculated
now – this is being done with grid computing8. The gaps between them, and all
kinds of other things you can measure when you’ve got a set of seemingly random
real numbers, are being analysed using a variety of statistical methods; random
matrix theory is being applied. So these
are, to some extent, experimental studies.
Marek Wolf (Institute of Theoretical Physics, Wroclaw) experimentally detected a widespread
physical phenomenon called ‘1/f noise’ in the distribution of prime
numbers.
The prime numbers continue out to infinity – we’ve known they
go on forever since Euclid –
but we can only calculate them up to a point.
We tend to think our current computers are ‘powerful’, and we think we
can find ‘big’ prime numbers – you know, now and again one will even make it
into the news. But there’s no such thing
as a ‘big’ number, this is what I always try to get across to laypeople –
because the number system goes on forever, however far we look, proportionally
it’s still an infinitesimal step into an infinite unknown.
C: And, of course, in consequence, no matter how many zeros
are found, one never comes any closer to a proof of RH.
MW: Yes, exactly.
There’s the duality between Riemann zeros and primes, and so the same
idea applies with the zeros. We can
never calculate more than an infinitesimal proportion of them. Sometimes I use the analogy of large
telescopes: you’re looking out into space, and the more you can see, the more
you can deduce about the nature of the universe you live in. Analogously, we can ‘see into’ the number
line a certain distance, what we think is a ‘long way’ – but again, it’s
meaningless, really, to say a ‘long way’ or a ‘big number’. Of course we can
see further than we’ve ever seen before, so we can detect certain apparent
patterns which can give rise to hypotheses that we can then attempt to
prove. Similarly we can look further
than ever up the critical line now, and with hundreds of billions of Riemann
zeros we can test certain hypotheses and generate new ones. So there’s an experimental element in
that. But as far as the hypothesised
Riemann dynamics goes, the quest to try and pin down something like a Riemann
dynamics isn’t really being furthered by experimental science as such, rather
the progress seems to be coming from mathematicians like Connes, Lapidus and
Christopher Deninger (University
of Münster). But these people – well, certainly Connes and
Lapidus – do have a very broad interest in large areas of both mathematics and
physics, which is what makes their work so interesting. It would be misleading
to suggest that mathematics has become an empirical science, since exact
formulations are still possible – even in these more hazy areas – at least we
can’t rule out the possibility of exact formulations. But an empirical approach
has become potentially useful. In connection with this, I should mention the
emergence of probabilistic number theory, which in itself raises huge
questions. Probabilistic number theory
effectively started in 1940 with the Erdös-Kac theorem which I mentioned
earlier, the discovery that the number of prime factors in ‘large’ integers has
a kind of random distribution which follows the Gaussian distribution or bell
curve. That discovery led to a whole
outpouring of theorems and conjectures which have collectively become known as
probabilistic number theory, where you apply the methods of probability theory,
and make use of the key idea that divisibility by a prime p and divisibility by
a different prime q are ‘statistically independent events’, one has absolutely
no bearing on the other. When you deal
with probability you deal with this idea of independent events – well, these
are arguably the most independent ‘events’ there can ever be. Physical events
in any well-prepared experiment, you might think they are independent; but
ultimately every particle of the universe is gravitationally pulling on every
other particle, everything is linked, although the effects are generally
negligible and impossible to quantify.
The only place where things are totally independent is in the number
system – the divisibility of an integer by two different prime numbers. So here
is a place where you can apply probability theory, where everything is entirely
exact, where you can let your n tend to infinity and that actually refers to
something. Probabilistic number theory
allows you to prove things about prime numbers and about the number system
generally, using the techniques of probability theory, and that seems highly
counterintuitive. The fact that it works at all raises questions which are more
like ‘mysteries’ than formal mathematical problems. There are three separate areas worth
mentioning here: the emergence of probabilistic number theory, the
effectiveness of the analogy with statistical mechanics – partition functions,
etc. which I described earlier – and then the rôle of random matrix theory,
which was developed for modelling subatomic phenomena, but then was
accidentally found in the 70’s to apply directly to the theory of the Riemann
zeros. So you’ve got three separate
areas of randomness-based thinking, stochastic disciplines if you like. They deal with large systems which have too
many components to keep track of individually – these components must be
treated almost sociologically, as populations, and subjected to probabilistic
or statistical thinking. All three areas
have been effective in furthering our understanding of the number system. Now, again, mathematicians would tend to
focus on at most one of these things, see what could be achieved and perhaps
make a few sober remarks on what it all might mean. But to me, the fact that you’ve got these
three areas, all of a stochastic nature, shedding light on the primes and the
Riemann zeros, points to something very strange. We’ve got primes, the most basic things in
the universe as we experience it – the
sequence of prime numbers is the most basic non-trivial information there is,
it’s the one thing you can’t argue with anyone about, it’s the one thing all
life forms in the universe could potentially relate to. And yet in some ways they seem to be best
understood using a type of analysis more appropriate to weather systems,
roulette wheels, boxes of gas, etc. I’ve always thought of probability theory
as a slightly ‘tainted’ branch of mathematics for three reasons: firstly, its
origins are not entirely honourable – I seem to recall that it has its roots in
an historical accumulation of gambling techniques which got distilled into a
formal theory by Pascal. Secondly, it
deals with ‘events’, repeatable
‘events’, which are
categories of physical phenomena, ‘occurrences’ of one type or another which
can be quantified, measured, counted, numerically analysed, etc. whereas truly
‘pure’ mathematics doesn’t rely on anything in physical reality in quite this
way. Finally, by its very nature, probability theory tends to deliver imprecise
information – there’s always a margin of error. And yet this system of thought,
which has been developed in order to deal in an approximate way with large,
complicated physical systems, seems so perfectly applicable to something which
is so fundamental, which is characterised by an absolute precision, and which underlies
everything else – the distribution of primes!
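[A minimal Python sketch of two claims from this passage – that divisibility by two distinct primes behaves like a pair of statistically independent events, and that the count of distinct prime factors spreads out in a bell-curve-like way, as the Erdös-Kac theorem describes. The bound N and everything else here are illustrative assumptions, not taken from the discussion:]

```python
# A rough empirical illustration of the two claims above (hypothetical
# parameters, not from the interview).
from collections import Counter
import math

N = 100_000

# 1. Divisibility by two distinct primes p and q behaves like a pair of
#    independent events: the density of integers divisible by both is (1/p)*(1/q).
p, q = 2, 3
both = sum(1 for n in range(1, N + 1) if n % p == 0 and n % q == 0)
print(both / N, (1 / p) * (1 / q))        # ~0.1667 vs 0.1666...

# 2. Erdös-Kac (informally): omega(n), the number of distinct prime factors,
#    spreads out roughly like a bell curve, with mean and variance of order
#    log(log(n)) for large n.
def omega(n):
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

values = [omega(n) for n in range(2, N + 1)]
print(sum(values) / len(values), math.log(math.log(N)))  # sample mean vs. log log N
print(sorted(Counter(values).items()))                   # bell-shaped tally of omega
```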
It’s as if we’ve got something back-to-front. It’s similarly interesting that probability
should have such a fundamental rôle in quantum mechanics: an ultrasimplified
account of what QM tells us is that, insofar as it can be understood as being
made of particles, the universe can also be understood as being made of ‘fields
of probability’. Probability theory in
a casino, yes; or in a meteorology lab... But prime numbers? The fundamental
level of matter? These are things we
instinctively feel should be totally deterministic and rigid. And to me, this suggests we’re looking at
something the wrong way ‘round – something’s been turned on its head. It’s as if ‘randomness’, or some essential,
almost esoteric quality associated with randomness – that quality evidenced in
our failure to really understand what we mean by ‘randomness’ – is emanating up
from these fundamental
realms. We’ve been
dragging it down from the macroscopic scale, the casino scale, down to this
micro-level, in a numerical and physical sense, and finding that it helps us
understand something. But I feel
something’s back-to-front there. A
mathematician called R.C. Vaughan states in one of my archived quotations that
it’s obvious that the prime numbers are random, but we don’t know what
randomness is. And there is a real
problem with defining randomness. There
are several definitions: information theorists and probability theorists have put
forward definitions of what it means for something to be random. The
definitions overlap to a large extent, but ultimately, when is a string of
digits random? If I give you a block of
a thousand 0’s and 1’s, it might look completely random, it might even pass numerous
tests run on it for randomness…but then I could reveal, well actually, no, it’s
a thousand digits of π starting from the two-millionth digit. And then it’s not random anymore. So there’s the whole question of what
randomness is. This is one of the
central themes that fascinates me: where does this notion come from, where does
it lead us in our understanding of the reality we inhabit, and why does it tie
in so closely with both the fundamentals of the number system and of particle
physics? And then there’s the difficulty
of talking about having two of anything: in order to have two of anything
you have to have a category to which those two objects both belong. But the categories are always imprecise. We have to partition spacetime into blocks
with ‘fuzzy’ boundaries, and then attempt to match aspects of these blocks up
with some ideal which exists in a sort of mental hyperspace, a Platonic realm
of sorts. So we’re projecting these
categories onto the universe which actually aren’t intrinsic in the universe;
we’re setting out these boundaries, but the boundaries are blurry. Yet, despite the possible problems this
blurriness might cause, on a practical level we’re able to then extract data
which fits remarkably well against certain probability distributions. The most
ubiquitous and I think the most important one is the Gaussian or bell curve –
and this, as we can see from the Erdös-Kac theorem, has a mysterious and
fundamental relationship with the number system we’re using to count members of
our fuzzy-boundaried categories in the first place. The effectiveness of
statistical inference in the hard sciences and the social sciences – I’m sure
this would be widely disputed, but I feel there is a mystery there which isn’t
really being acknowledged, and it has to do with how we can name and count
anything, and how, when we do name, count and measure things they seem to
collectively accord with these ideal mathematical blueprints or templates. That says more about the way our mental
hyperspace is being mapped onto the physical universe than about anything
intrinsic in the physical universe.
C: When you look at the local you expect to find precision,
whereas with the global you’re happy with statistical data. Here we’re looking at these local, precise
conditions and there seems to be randomness ‘built into’ them in a way that’s
not immediately comprehensible: After all, they’re not statistical aggregates
in any obvious sense.
MW: Yes, the set of positive integers is in a category of
its own, there’s just one number system. Yet, it’s as if this entity – if we
take the positive integers, the primes and the zeta function as aspects of a
single thing, different aspects of the same entity – rather than being a
carved-in-stone, unique thing, is actually just one example of a class of
things, and we’re able to apply statistical analysis because of that. This is
why, when I started finding out about these things, I felt my ‘prime evolution’
thing might have something in it, this idea of the number system being a frozen
state of something which had previously inhabited many different states. I’ve had certain quite critical,
serious-minded people react to some of my more sensational suggestions by
saying, well all this number theory and physics, there’s nothing mysterious at
all — the universe follows mathematical laws, so of course we’d expect certain
aspects of number theory to show up in the physical world. If they’d look a bit deeper into this, they’d
see what I meant: yes, it’s not
surprising, given that maths underlies all of physics, that we might get, say,
particular values of the zeta function showing up in string theory, or the
theory of integer partitions relating to Bose-Einstein condensates or whatever:
you get these odd little instances of number theory/physics correspondence;
I’ve catalogued a lot of these in my web-archive. But that’s not the really interesting
stuff. What’s much more surprising is
the way physics seems to be pointing the way for understanding the zeta
function, and often this is statistical or stochastic physics, as if the zeta
function – and in some sense, then, the number system – is just one example of
a more general phenomenon. And I don’t
think anyone disputes the spectral nature of the Riemann zeros now. But it’s not one archetypal ubiquitous
spectrum we see showing up all over physics.
If we saw ‘the zeta spectrum’ – as it might be called – everywhere, then
it would somehow feel a lot less mysterious.
We’d probably feel quite comfortable with such an affirmation of the old
idea that the number system directly underlies the structure of the physical
universe. But the Riemann zeros take the
form of an almost disconcertingly arbitrary-looking spectrum, never known to
humans prior to the late 1850’s. In the
very recent past we’ve been confronted with the fact that it has all the
fingerprints of membership in certain classes, very wide classes, of very
specific physical systems, as if it’s just one element of a whole class, a
population of things. So it’s a bit like
the way you might be able to, based on the postcode of a UK resident,
predict certain things about his or her attitudes, abilities, tastes, whatever
– because you’ve got statistical information about the population, you can make
plausible hypotheses about this specific individual.
And it’s as if the primes-zeta entity, whatever you want to
call it, despite its seemingly fundamental, unique status, is just one
individual in a wider class of things.
But the space in which that class exists is something we haven’t even
begun to imagine might exist, or we haven’t got any access to. So we have this image of a frozen system,
something congealing into a state, and then…it’s as if you walked into a
concert hall and caught the last note of a symphony, and everybody’s applauding
ecstatically and you’re wondering, what’s all the fuss about? You didn’t witness the process that led up to
that last note, and it’s like, with the prime numbers, we’re just walking in on
the last moment, the culmination of something.
As if there was a whole
‘symphony’ that led up to that, and humanity may be on the verge of revealing
it.
*
C: All of the foregoing seems to suggest that what we think of as simple
and elegant foundations may in fact be the eventual product of something which
is rather complex, even beyond our comprehension. So we’d have to separate out what seems simple
and elegant to us, from what is actually fundamental in the universe, and this
is another sense in which mathematics mirrors the condition of theoretical
physics, in which, characteristically, the further we go towards the
fundamental, the stranger things become (string theory being a case in point).
Rather than defining the primes on the basis of the
supposedly fundamental and simple number line, in fact it seems that, when we
look through this complex theoretical-mathematical prism you have described, there’s
actually something more fundamental about the primes. The primes themselves
produce...
MW: ...the number line, yes, you can see it that way. I came up with this naïve idea, before I
really learned any of the more serious stuff, this was after I had been
thinking about the Erdös-Kac theorem, the primes and the Gaussian distribution,
but before I ‘experienced’ the dynamical aspect of the primes. I was thinking
about how we tend to construct the primes.
We’re taught to construct the number line starting with one and then
using the Peano axioms, you know, there’s an axiom that basically says, whatever number you arrive at you can always add
another one to it. And I thought, hold
on, where does this come from, this idea that you can always add another one,
and I started to question that as something that might not be as obvious as it
first seems. There’s some hidden
assumption there about order, time or something, I felt. And I thought, well, there’s an alternative
approach we could adopt here, we could start with an infinite alphabet of
meaningless symbols, an infinite alphabet of meaningless yet distinct symbols,
and then create the dictionary of all possible words of finite length out of
that alphabet.
This alphabet of symbols would correspond to the prime
numbers. By combining the symbols in all
finite possible combinations, you generate the set of words in your
infinitely-long dictionary — this corresponds to the fact that if you combine
the primes in all finite multiplicative combinations, you get the set of
positive integers. Except now there’s no
sense of order: Because we’re not starting with the positive integers, we don’t
need to think of one prime number as being ‘greater than’ another. The primes are not embedded in the positive
integers yet, they’re just these free-floating abstract symbols. So I used to try and conjure up this image of
bubbles floating in an imaginary space, each with an exotic glyph, a symbol
from our ‘alphabet of primes’ on it. The
idea is that you can then join any number of these bubbles in any combination,
including repeats. All possible such
bubble-clusters are to be found floating somewhere in this space. Some are larger than others in the sense that
there are more bubbles in the cluster — that is, more prime factors — but
there’s no sense of a cluster coming ‘before’ or ‘after’ another cluster. It’s
only when you cross the Rubicon of deciding which alphabetic symbol is going to
be your ‘2’ that you start to create some sense of order. So I had these hints
and intuitions — I couldn’t really pin them down to anything very rigorous —
that we’ve been thinking about randomness and the fundamentals of reality in a
back-to-front way. We’ve got ourselves
into a kind of confusion where everything seems immensely complicated when we
delve down to the fundamentals of either the number system — which seems at
least partly to inhabit the realm of
psyche — or of the physical world, the world of matter — just open a textbook
on analytic number theory or quantum mechanics and you’ll see what I mean. I felt this issue could be addressed if we
examined some of our ‘obvious’ assumptions.
We think we’ve taken the obvious construction — that is, you start with
one, then you add one, and then you add another one, this idea you can always
add another one. Rather, what if we
start with the primes, and build the number system up that way? The whole ‘order’ thing then becomes more of
a ‘phenomenon’ than something axiomatic...
C: Coincidentally, the ‘legendary’ Dr. Daniel Barker also devised a notation system for the
positive integers based upon prime factorisation, which is very close to what
you’re talking about here.7 You have these inseparable lexicographical units
from which numbers are composed, and they could be in any order. He was interested in place value as a
culturally-repressive numerical practice, and this was a way of doing away with
place value completely. Each number
would just be like a collection of boulders or something.
MW: The lexicographical approach, yes. I’ve tried to get
this across to some lay-people I’ve talked to.
There’s the fundamental theorem of arithmetic — literally the most
important thing we know about the number system. And no more than 0.1% of the
population have even heard of it, I’d guess.
It basically says that every integer breaks down uniquely into prime
factors. And we’ve got this strange
situation where almost nobody knows this, this simple fact, the most important
thing we know about the number system.
This is straying into other territory, but to me, humanity’s
relationship with number is rather unhealthy, because we’ve built this entire
civilisation around the mathematical sciences, and yet the ordinary population
knows nothing of the basics, and often finds mathematics a source of fear and
unpleasantness. I try and conjure up
this image of these bubbles, the fact that the clusters can be as large as you
want, you can have huge ‘planets’ of prime factor bubbles joined together —
there’s no upper size limit. And so something like the greatest common divisor
can then be explained very simply, it’s just the intersection, literally where
the two clusters intersect. The least
common multiple can be similarly explained.
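[A minimal sketch of the ‘bubble cluster’ picture in Python: each integer is treated as a multiset of prime-factor bubbles, so the greatest common divisor is literally the intersection of two clusters and the least common multiple their union. The particular numbers are arbitrary illustrations, not taken from the discussion:]

```python
# Each positive integer pictured as a "cluster of bubbles": a multiset of
# prime factors (a hypothetical illustration of the analogy, not MW's own code).
from collections import Counter
from math import prod

def cluster(n):
    """Factorise n into a Counter {prime: multiplicity}."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def value(c):
    """Rebuild the integer from its cluster."""
    return prod(p ** k for p, k in c.items())

a, b = cluster(360), cluster(84)   # 360 = 2^3 * 3^2 * 5,  84 = 2^2 * 3 * 7
print(value(a & b))                # intersection of the clusters -> gcd = 12
print(value(a | b))                # union of the clusters       -> lcm = 2520
print(value(Counter()))            # the empty cluster is the integer 1
```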
Prime numbers distinguish themselves from non-prime integers because they
are individual bubbles. The integer 1 is the absence of any bubble, the empty
background space, the blank page in the “dictionary” I mentioned earlier… And then you imagine stringing the entire set
of clusters out in a line according to this ‘order’ thing, and you start to see
that there’s a counterintuitive variation in the sequence — you get small
clusters, huge clusters and single bubbles all intermingling according to no
sensible scheme. And this is the sort of thing that I’d eventually like to push
further out into the public domain just to see what sort of effect it would
have, when people start looking at their supposedly familiar number system in
this new light. Because people tend to
think of the number system like a row of boxes of cereal in a supermarket, just
identical units stacked together, a sort of homogeneous featureless thing that
just goes on: each number is just the previous one plus one, there’s nothing
much there, nothing of interest. And it
was Frank Sommen, a really remarkable, imaginative Flemish mathematician whom I
worked with during my PhD studies, who once said to me, every positive integer
is a different animal. I came to see
exactly what he meant: each one’s got its own ‘anatomy’, every one’s a
different story, and that starts to become apparent as soon as you realise that
each integer factors in a unique way into prime numbers.
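[A tiny companion sketch of that ‘different animal’ point: listing a short run of consecutive integers with their prime-factor anatomies shows single bubbles, small clusters and large clusters intermingling. The window chosen is arbitrary:]

```python
# The "anatomy" of a run of consecutive integers (an illustrative window,
# not from the interview): primes, small clusters and large clusters
# intermingle with no obvious pattern.
def factorise(n):
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

for n in range(90, 101):
    print(n, factorise(n))
# e.g. 90 [2, 3, 3, 5]   91 [7, 13]   96 [2, 2, 2, 2, 2, 3]   97 [97]
```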
PC: You have made numerous
interesting observations in the above section of the discussion.
However I will confine myself here
to just three important issues.
You mention Timothy Gowers and his
contention that the primes somehow feel like experimental data, but at the same
time he’s well aware that they are rigidly determined.
This simply reflects the standard
reduced view where the primes are interpreted in an absolute quantitative
manner.
However appropriately understood,
the primes have complementary quantitative and qualitative aspects which
dynamically interact in a two-way fashion.
This implies that an important uncertainty
principle applies to the very nature of each prime.
So, for example as we have seen
with the number 2, its two units possess a relative independent identity (as
separate units) in quantitative terms and also a relatively interdependent
identity (as shared units) in a qualitative manner.
Therefore from a strict experiential
perspective, which is the only way we can truly understand number, the nature
of a prime cannot be absolute.
We can however approach close to
this absolute state by emphasising the independent quantitative nature of the
prime to the almost total exclusion of its qualitative counterpart. And of
course this is what happens in the conventional mathematical interpretation of
prime numbers.
However as you have illustrated very
well we can never attach absolute status to any category of understanding.
One might initially counter that
whereas this might strictly apply with respect to physical phenomena, where
some doubt as to appropriate classification may always exist, it does not apply
to the mathematical world of number.
However this is not in fact the
case. The widely held belief in the existing nature of number in itself reflects
a tight social consensus of professional mathematicians all abiding by the same
rules and procedures.
However this consensus is by no
means absolute, especially when someone dares to seriously question these arbitrary
rules. For example my notion of a prime differs radically from the present
consensus view. So even though we are dealing here with the mathematical
category of a prime number, consensus as to its nature is clearly relative.
Just as in experience, one can approach
the absolute extreme where a prime appears totally fixed in quantitative terms
as rational form, one can also approach the relative extreme in psychological terms,
where the same prime now appears fully dynamic as a purely intuitive energy
state.
And actual experience necessarily
entails a trade-off as between these two extremes, which can change, implying
continual transformation with respect to number.
So, just as an uncertainty
principle applies to particles in quantum mechanics, it equally applies to
number in the relationship between its quantitative and qualitative aspects.
And in conventional mathematical terms, the quantitative has blocked out to a
remarkable extent any explicit recognition of the holistic qualitative aspect
of the primes.
However just as the uncertainty
principle applies to the internal nature of each prime, equally it applies
externally to the collective nature of all primes.
So we can indeed start by
attempting to view the collection of primes in a merely analytic quantitative
manner (as absolute unchanging forms). However this is strictly meaningless in
the absence of a corresponding qualitative shared relationship existing between
such primes. And this qualitative
relationship is provided by the zeta zeros which are then in relative terms
understood in a holistic intuitive manner.
So once again in understanding the
primes we have the rational extreme approaching completely fixed forms (in
quantitative terms) and the corresponding intuitive extreme, where through the
zeta zeros they approach a purely elusive formless state (in a qualitative
manner).
And a truly comprehensive
understanding is subtler still when one allows for the dynamic shifting of
reference frames in experience as between quantitative and qualitative (and
qualitative and quantitative) respectively.
So equally in internal terms with
respect to each individual prime, we can view its component units in an
analytic quantitative manner, whereby in complementary terms the prime itself
is now viewed relatively in a holistic qualitative fashion; likewise externally
with respect to the collection of primes we can view the zeta zeros in an
analytic quantitative manner whereby relatively, the primes are now viewed as
constituting the factors of the natural numbers in a holistic (qualitative) fashion.
So both in internal and external
terms an uncertainty principle exists as to the precise nature of the primes,
which can vary as between two extreme points, which are analytic and holistic
(and holistic and analytic) with respect to each other.
Indeed this can be used to perhaps
clarify somewhat the insight that both you and Lapidus share regarding the
“dynamic flow” of the primes.
As we have seen above, the true
experience of the primes (externally considered) is inherently dynamic, varying
as between the two extremes of absolute rigid forms and purely relative
intuitive energy states (represented by the zeta zeros).
And because of the holistic
complementarity of the physical and psychological aspects of reality, this
equally entails variation as between the two corresponding extremes of physical
forms and energy states respectively.
Thus in both physical and
psychological terms, a continual dynamic flow properly constitutes the true
nature of the primes. And as number (with respect to its quantitative and
qualitative aspects) represents the deepest encoding of reality, this flow is
inseparable from all the phenomenal events of everyday life.
Thus the tension between addition
and multiplication, which equally reflects the tension as between the
quantitative and qualitative aspects of mathematical understanding, cannot be
properly resolved until one is able to hold together in a balanced manner in experience
these two extremes of the primes, as rigid forms and pure energy states respectively.
Of course the deeper implication here
is that a comprehensive mathematical understanding must thereby incorporate the
marriage of specialised rational interpretation (related directly to analytic
type appreciation) with a highly refined contemplative vision (related directly
to pure holistic awareness).
And once again I can only repeat
with every fibre of my being that an enormous shadow hangs over the
mathematical profession in its total inability to recognise the crucial importance
of the holistic aspect, which intimately affects appreciation of every
mathematical notion.
In a misguided attempt to create a
merely quantitative interpretation of mathematical symbols, it has constructed
a hermetically sealed chamber with respect to accepted assumptions and
practices, effectively insulating itself from any outside criticism. And this
represents an extremely unhealthy state for the future development of
mathematics!
If I was in dialogue with Michel
Lapidus or indeed any leading members of the profession, I would attempt to directly
communicate the great urgency of this matter. So they now need to seriously
address the very assumptions regarding mathematics that they have blindly
accepted heretofore without question.
Put bluntly, these assumptions are
no longer fit for purpose. Indeed despite the undoubted marvellous achievements
of the quantitative approach, strictly they never were fit for purpose!
Unfortunately however, the mathematical community is still a very long way indeed
from recognition of this fundamental fact.
So for all their undoubted
brilliance and technical expertise, without admitting the equally important
role of the holistic aspect in their interpretation, mathematicians cannot hope
to grasp, among other things, the true significance of the Riemann Hypothesis.
You mention the Erdös-Kac Theorem
in several places, which clearly has had a major influence on your thinking.
I would have some reservations as
to the manner in which you attempt to distinguish purely mathematical Gaussian
distributions from their physical counterparts, when in the end both are of a
strictly relative nature.
However I take your point that
because the supply of mathematical data is infinite – though I would prefer to
say finitely unlimited – a much more accurate bell curve can theoretically
be obtained to model the distribution of distinct prime factors. But again,
this accuracy will always remain of a relative rather than absolute nature. In
any case we can never have fully independent data in mathematical terms, a view which
simply reflects the standard reduced quantitative approach to number.
Once again if data were indeed
independent in absolute terms then it would not be possible to relate them in
any meaningful sense!
And from a practical perspective it
should be stated that the Gaussian distribution is of limited use in making
predictions, due to the fact that within the typical range of computation, a
number on average contains very few (distinct) prime factors.
So for example with respect to
(distinct) prime factors, the mean for a number composed of 10,000
digits would be about 10. So if we attempted to represent the distribution of
these prime factors with a normal bell curve, it would be very far from
perfect.
Indeed we would have an inevitable
lack of symmetry in terms of the two sections of the curve, left and right of
the mean. This is due to the fact that whereas there is an obvious limit to how
few (distinct) prime factors a number can contain (i.e. 1, which for a number
with 10,000 digits is just 9 less than the mean), effectively there is no upper
limit to the maximum number of factors.
In this case, the normal curve would
be best for predicting the probability of a number containing a count of (distinct)
prime factors close to the mean, since such counts occur most
frequently. So for example there is about a 68% chance that a number chosen at
random would contain between 7 and 13 (distinct) prime factors.
However it would be especially inaccurate in predicting – say – the percentage
of numbers containing just one (distinct) prime factor.
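[The figures quoted above can be recovered from the Erdös-Kac normalisation, in which both the mean and the variance of the number of distinct prime factors are approximately log log n. A back-of-envelope check, not part of the original discussion:]

```python
# Back-of-envelope check of the figures quoted above for 10,000-digit numbers
# (editorial illustration only).
import math

log_n = 10_000 * math.log(10)        # a 10,000-digit number is of order 10**10000
loglog = math.log(log_n)             # Erdös-Kac mean (and variance) ~ log log n
mean, sd = loglog, math.sqrt(loglog)
print(round(mean, 1), round(sd, 1))              # ~10.0 and ~3.2
print(round(mean - sd, 1), round(mean + sd, 1))  # ~68% band: roughly 7 to 13
```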
However given these qualifications,
the Erdös-Kac theorem is remarkable in the way that it can theoretically demonstrate
the random nature of the distribution of prime factors.
But from another perspective, it is
not really surprising. So for example, in a simple coin tossing experiment
where the outcome for each individual (unbiased) coin is random, one would
expect, over numerous sample trials (of the same size), the frequency of H’s
to approximate a perfect normal distribution as the number of tosses in each
trial continually increases.
So accepting that the occurrence of
each individual prime is random, we would be led to expect a similar outcome
when calculating the frequency of (distinct) prime factors.
However, the tendency for both distributions
(coins and prime factors respectively) to be approximated by a perfect bell curve
would not necessarily imply that the two distributions are random in exactly
the same manner. So, because it inherently relates to the dynamic nature of
number, once again randomness is a concept that lends itself to a relative
rather than absolute definition.
From my perspective, what is really
interesting however is that the Erdös-Kac Theorem indirectly highlights the
fact that there are indeed two complementary ways in which the relationship
between the prime and natural numbers can be understood.
So from the more common external
perspective, we can look at the distribution of the individual primes among the
natural numbers.
However from the related internal
perspective, we can look at the collective distribution of (distinct) prime factors
among each natural number.
These two distributions are related
to each other in a dynamic complementary manner. And this then further implies
that the relationship as between the primes and natural numbers is one of
mutual interdependence.
Thus externally, from the
quantitative (analytic) perspective, primes appear to determine the magnitude
of the natural numbers; however relatively, from the holistic qualitative
perspective, it is the reverse with the spacing between the primes apparently determined
by the natural numbers.
Then internally, each individual
prime appears in qualitative (holistic) terms to be determined by an ordered
set of natural numbers; from the quantitative (analytic) perspective, each
natural number appears to be determined by a random set of prime factors!
It should be equally possible to
provide a corresponding external Erdös-Kac theorem to demonstrate the random
nature of the individual primes among the natural numbers.
Thus when
n is very large, we could take successive similar-sized random samples, preferably
in the vicinity of n (that are large in absolute terms though very small
relative to n).
The actual
number of primes occurring in each sample will then cluster around an average mean
value with deviations positive and negative occurring with respect to this
value.
And the
contention is that the deviations from the mean value are normally distributed,
approaching a perfect bell curve when both population and sample size increase
without limit.
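[A minimal sketch of the sampling experiment proposed here, with the value of n, the window size and the number of samples all chosen purely for illustration; it simply tabulates the prime counts in random windows near n, leaving open the question of how closely the deviations follow a bell curve:]

```python
# Tabulating prime counts in random windows "in the vicinity of" a large n
# (n, window size and sample count are all illustrative choices).
import math
import random
import statistics
from sympy import isprime

n, window, samples = 10**8, 10_000, 50
random.seed(0)

counts = []
for _ in range(samples):
    start = n + random.randrange(10**7)                  # a window near n
    counts.append(sum(1 for k in range(start, start + window) if isprime(k)))

print(statistics.mean(counts), statistics.stdev(counts))
print(window / math.log(n))    # expected count per window (prime number theorem)
```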
So these complementary
distributions can then be used to demonstrate again the two-way dynamic
relationship connecting the complementary notions of randomness and order with
respect to the number system.
Thus in terms of the external
relationship, though each individual prime is random the collective
relationship of these primes, with respect to the number system, is of a highly
ordered nature.
Then in terms of the internal
relationship, though each individual prime is of a highly ordered nature (so
for example that 3 entails 1st, 2nd and 3rd
members) the collective relationship of prime factors (with respect to each individual
natural number) is of a random nature.
When one understands their true
dynamic nature, with an uncertainty principle applying that mirrors closely that
of quantum mechanical particles, then there should be little difficulty in
appreciating why probabilistic number theory has proven so successful in terms
of the primes.
In fact many surprising connections
involving probabilistic notions can be made directly with the Riemann zeta
function.
For
example, the probability that a number chosen at random will be composed of
non-recurring prime factors = 1/ζ(2) = 6/π² = .6079
(approx).
This could
also be expressed as the probability that a number is square-free or
alternatively the probability that every prime factor of a number occurs at
most once.
Then in
more general terms, the probability that a number’s most-repeated prime factor occurs exactly
n times = 1/ζ(n + 1) – 1/ζ(n).
Therefore
for example the probability that the most-repeated prime factor occurs exactly
twice = 1/ζ(3) – 1/ζ(2) = .8319… – .6079… = .224. And in fact there are 23 such numbers
from 1 to 100.
Equally 1/ζ(2)
is the probability that two whole numbers chosen at random will not contain a
common (proper) factor.
And then
in more general terms 1/ζ(n) is the probability that n whole numbers chosen at
random will not contain a common factor.
So for
example, the probability that 3 whole numbers chosen at random will not contain
a common factor = 1/ζ(3) = .8319….
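[These identities are straightforward to check numerically; a minimal Python sketch comparing the square-free density and the coprimality probability against 1/ζ(2) = 6/π², with an illustrative bound N chosen for this note rather than taken from the discussion:]

```python
# Numerical check of the zeta probabilities quoted above (illustrative bound N).
import math
import random
from math import gcd

N = 1_000_000

# Density of square-free numbers up to N, via sieving out multiples of squares.
square_free = [True] * (N + 1)
d = 2
while d * d <= N:
    for m in range(d * d, N + 1, d * d):
        square_free[m] = False
    d += 1
print(sum(square_free[1:]) / N, 6 / math.pi ** 2)   # both ~0.6079

# Probability that two numbers chosen at random share no common factor: also 1/zeta(2).
random.seed(0)
pairs = 200_000
coprime = sum(1 for _ in range(pairs)
              if gcd(random.randrange(1, N), random.randrange(1, N)) == 1)
print(coprime / pairs)                              # ~0.6079
```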
You make the very interesting
observation here and in the trilogy that the structure relating to the spectrum
of zeta zeros looks like one particular example of a class of physical systems,
whereas normally we would expect it to be the other way round, with a physical
system’s behaviour corresponding to a particular class of mathematical
structures.
A considerable problem here relates
to the fact that we have been conditioned to consider mathematical structures
in fixed absolute terms.
However properly understood as we
have seen, number is dynamic (with complementary physical and psychological aspects).
And this is due to the fact that number represents the inherent encoding of all
phenomena as to their fundamental nature.
So, I have no difficulty therefore,
in accepting that the spectrum of zeta zeros represents a specific example of
an inherently dynamic mathematical system.
I would then see from this
perspective that the physical quantum system (of energy states), which has been
the subject of so much Riemannian research, closely resembles the original dynamic
mathematical system of zeta zeros. However, at the extreme point where a physical
system precisely matches its mathematical counterpart, by definition, it loses
any distinct physical identity!
Holistic understanding can then
suggest the reason why there is such a close approximation here as between both
systems.
As the sub-atomic level lies close
to the formless holistic ground of matter, the nature of both physical and
mathematical reality becomes much more closely intertwined at this stage. So
when one probes deeply within the most transient manifestations of matter, one
is left with a physical understanding of reality ever closer to the
mathematical (as the encoding of its underlying nature).
And though I would be heavily
critical of its mere analytic mode of interpretation, superstring theory
testifies very well to this fact, where it constitutes more a strictly
mathematical than physical appreciation of nature.
I will now address the issue as to
why the spectrum of zeta zeros is apparently associated with a particular case
(quantum chaotic) of a more general class of physical systems.
In dynamic terms, number as we have
seen, combines both analytic and holistic aspects. Whereas the analytic can be
directly identified in physical terms with the classical approach, the holistic
aspect by contrast can be more naturally identified with the wave nature of
quantum reality.
Thus to properly mirror both dynamic
aspects of number, it is reasonable to suppose that it should thereby combine a
mixture of the classical and quantum approaches in physical terms. Also,
whereas the phenomenal nature of physical reality loses any strict identity
below the Planck length, it can still be given a meaning in classical mathematical
terms. So quantum chaology, as proposed by Michael Berry, seems to me an
attempt to somehow marry the physical with the mathematical nature of reality
at its closest approach to a purely formless state.
Thus I am not surprised that a
close correspondence has been shown as between the zeta zeros and the energy levels
of a postulated quantum chaotic physical system. However any physical system
that exactly replicates the zeros would by definition have a purely
mathematical identity (as the hidden encoded nature of the system).
So in this sense, one could
truthfully say that the postulated physical system, for which Berry and his colleagues are searching, is
in fact none other than the – holistically understood – set of zeta zeros,
where the highly dynamic physical nature of reality approaches identity with its
mathematical encoding!
This also implies that the ultimate
physical explanation of reality must be therefore of a mathematical nature
(with respect to both its analytic and holistic aspects).
And this explanation relates
directly as we have seen to the dual dynamic nature of the number system which inherently
encodes the nature of reality in quantitative and qualitative terms.
In this respect, current physical
theories such as superstring theory are strictly of a secondary nature. And the
great limitation of these theories is that, as presently constituted, they are
interpreted in a merely analytic manner. So no means presently exist therefore
for showing how their findings can properly resonate with a truly integrated
appreciation of the overall nature of reality.
It is vital also at this point to
recognise a key distinction as between scientific and artistic type
appreciation of phenomena.
If we take for example flowers such
as roses to illustrate, from the scientific perspective the important
requirement is that each individual rose can be identified as a common member
of the overall class of roses. In this context, the roses are given a merely
impersonal cognitive identity in quantitative terms, thereby enabling them to
be employed as scientific data.
However it is somewhat the opposite
from the artistic (aesthetic) perspective, where each rose is now recognised in
terms of its unique individual qualities which are conveyed through the affectively
inspired senses.
Recent research has concentrated
almost exclusively on attempting to demonstrate the physical links with the
zeta zeros.
So again whereas this scientific interpretation
conforms directly to the analytic aspect of understanding (where the parts are
collectively related to the whole) artistic appreciation in complementary terms
conforms to the holistic aspect (where the whole is contained in each individual
part member).
Therefore when it comes to the
holistic interpretation of the zeta zeros (as complementary to the analytic
primes) we would expect them to conform to a special case of a more general
phenomenon.
However again the deeper message
here is that mathematics itself in its understanding of number must now move
directly to embrace the qualitative aesthetic aspect of reality, just as the analytic
had formerly been identified with the quantitative rational aspect.
However this realisation then
greatly increases the mystery of how the qualitative aspect of phenomena (with
unique diversity for each member of a class), which is inherent in all number
appreciation, can then be made consistent with the quantitative aspect of interpretation
(where all are collectively understood as common members of the same class).
This mystery is then tied up, I
believe, with the general class of L-functions, of which the Riemann zeta
function serves as the archetypal key member.
It is as if the Riemann zeta
function provides us with the basic colours or tones to enable holistic (qualitative)
type appreciation of number in a manner that harmonises it – through the requirement that all the zeta zeros lie on the
critical imaginary line
– with corresponding
analytic (quantitative) interpretation.
The other L-functions, that are
potentially infinite in number, relate to various ways in which these colours
and tones can be configured. And all these in turn have their distinctive zeros
and Riemann Hypotheses thereby enabling both holistic and analytic type
appreciation to be reconciled with each other in every single case.
And then by extension all these
mathematical relationships i.e. L-functions – with respect to their complementary
holistic and analytic meanings – can now be profoundly understood as
underlining the very means by which artistic sensibility and scientific
understanding can in turn be fully harmonised with each other in human experience.
Precious little attention however
has been given in research to the psychological role of the zeros, which is of
equal importance to the physical aspect.
So this is an issue to which I have
devoted considerable attention in recent years.
To your credit, you go much further
than your experimental colleagues in recognising that the zeta zeros should
represent some deep archetypal imprint of consciousness.
Though we have already addressed
this to some degree earlier in the discussion, because of its great importance
it may be valuable to address it again.
We recognise the significance of
the prime numbers in analytic terms as they form the basis of our quantitative
appreciation of reality.
The way to understand the
fundamental role of the zeta zeros is to now look at their significance as
directly complementary, which befits a dynamic appreciation.
Thus the great importance of the
zeros from a holistic perspective, simply stated, is that they form the basis
for our qualitative appreciation of reality.
I should perhaps clarify the nature
of this qualitative appreciation again in that it is directly related to the
manner we instinctively relate to the world through sensation and feeling!
In earliest childhood, as we have
seen, experience is necessarily primitive in nature, where the (unconscious)
holistic desire for meaning is directly confused with specific (conscious) phenomena.
So the holistic appreciation of the
zeta zeros can best be viewed as achieving the full developmental solution with
respect to eradicating the involuntary nature of primitive instinctive desire.
Thus through the zeta zeros, the sense recognition of phenomena can be properly
combined in a sustained manner with the most refined form of contemplative type
awareness. Thus as well as possessing a unique individual identity, each sense
phenomenon now continually radiates a numinous quality as an archetype of an
eternally present spiritual moment.
At our present stage of evolution, few
realistically have the capacity to go beyond mastery of the early zeta zeros.
So the understanding in holistic terms of each new zero (on the imaginary scale) can then be seen to represent a further degree of intuitive refinement in the contemplative manner in which one experiences sense phenomena.
So in the future, I have little
doubt that the holistic appreciation of the zeta zeros will come to play a
central role in the advanced meditative practice of spiritual programmes of
development.
However from another perspective,
where reference frames are reversed we have the complementary relationship of
the zeta zeros (as analytically understood) and the holistic appreciation of
the prime numbers.
Now there is little need to say
much here regarding the analytic interpretation of the zeros, which have
received intensive investigation in both mathematical and physical terms. So
mathematicians are well aware at this level of the important role which the
zeros play in relation to the quantitative distribution of the primes.
However the corresponding holistic interpretation of the primes, though of immense potential significance, has received little or no attention.
I will try and briefly comment here
on its vital importance.
Indeed initially it came as a great
surprise, when I realised the enormous significance attached to the holistic
notion of the primes and natural numbers (which in dynamic terms are always
necessarily intertwined with each other).
It is easy to see in quantitative
terms how 1 represents a specific member of the natural number class.
Now the significance of 1 in a
holistic context relates to the fact that what is conventionally recognised as
mathematics represents just one possible form of interpretation, which is
1-dimensional in nature.
So, just as in quantitative terms 1 represents a specific member of the natural number class, in holistic terms the conventional approach represents just one member of the corresponding class of possible interpretations (a class now defined in a qualitative manner).
This entails that there is an unlimited number of other possible interpretations (besides the 1-dimensional) for defining mathematical reality.
So just reflect on this one point
for a moment! Though potentially an unlimited set of interpretations exists,
what is presently accepted as mathematics is rigidly confined to just one such
interpretation (i.e. 1-dimensional).
This interpretation is however unique
in that it is the only one that is absolute in nature (where the qualitative
aspect is formally reduced in quantitative terms).
All other dimensional
interpretations (≠ 1) are of a dynamic relative nature, entailing the
interaction of both analytic (quantitative) and holistic (qualitative) aspects.
Then, as I have already stated,
appreciation of the simplest of these relative systems (2-dimensional) serves
as a prototype for all other possible interpretations.
Therefore, though complexity
increases with the higher numbered dimensional interpretations, there is no
change in the fundamental rationale of relative dynamic interaction. In general
terms, these higher dimensions relate to the increasingly intricate ways the
two key polarity sets, i.e. internal/external and whole/part, can be dynamically
configured in a fully balanced fashion with respect to reality.
What is remarkable is that an equally important set of zeros is associated with the holistic nature of the primes, which is related to the circular number system (defined in terms of the unit circle in the complex plane).
So we have here the holistic equivalent of obtaining the roots of unity in the complex plane (with, however, the default root of 1 ignored).
Just as we have a spectrum of zeta
zeros representing energy states, equally we have a more obvious spectrum of
energy states relating to all the natural numbers (except 1).
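To make this circular number system a little more tangible, here is a minimal Python sketch (the helper name nontrivial_roots_of_unity is simply my own illustrative choice) that lists, for any n, the n-th roots of unity on the unit circle with the default root 1 set aside:

import cmath

def nontrivial_roots_of_unity(n):
    # all n-th roots of unity, omitting the default root 1 itself
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n)]

# e.g. the non-trivial fifth roots of unity, each of absolute value 1
for z in nontrivial_roots_of_unity(5):
    print(f"{z.real:+.6f} {z.imag:+.6f}i   |z| = {abs(z):.6f}")

Of course the computation only shows that these points lie evenly spaced on the unit circle; the holistic significance I attach to them is not something a calculation of this kind can capture.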
And once again, precisely because mathematics, as presently understood, is confined to the 1-dimensional approach, it remains completely blind to these alternative dynamic interpretations.
And the truly marvellous fact is that the Riemann zeta function (when correctly interpreted in dynamic interactive terms) provides the key to relating, throughout the entire complex plane (except at the singular point s = 1), the manner in which the analytic (quantitative) and holistic (qualitative) aspects of number interact.
And in holistic terms these zeros,
which are related to the holistic circular number system (which I generally
refer to as the Zeta 2 zeros), play a complementary role to the Zeta 1 non-trivial
zeros that we have already discussed.
So, just as the senses can become
increasingly imbued with a refined type of spiritual instinct, likewise reason
can equally become imbued with a corresponding type of increasingly refined
intuition.
However there is a key difference.
Whereas the qualitative appreciation associated with the Zeta 1 zeros operates
in an immanent spiritual manner, by which one is better enabled to appreciate
the unique individual identity of each phenomenon, the Zeta 2 zeros operate, relatively,
in a corresponding transcendent spiritual fashion, by which one is enabled to
better appreciate the more general universal features of reality.
However, ultimately in their truest
integrated expression, both immanent and transcendent aspects become
simultaneously related in a two-way manner with respect to experience.
After concentrating so much effort
on getting to the bottom of the Riemann Hypothesis, it then came as something
of a shock to realise that this too represented just one specific example of a
more general class of mathematical objects i.e. L-functions.
I had discovered the LMFDB (The L-Functions
and Modular Forms Database), which comprises a massive highly valuable
resource of over 20 million L-functions.
And as modular forms and their accompanying L-functions now seem to be ubiquitous in many areas of mathematics, there is no limit to the total number that may exist.
As you know, these L-functions share key features in common with the Riemann zeta function. They can all equally be expressed as a certain sum over the natural numbers and as a product over the primes. They all have a line of symmetry (the vertical line through 0.5 on the real axis) and a functional equation relating the value at s to the value at 1 - s, so that values of the function for s > 1 can be reflected to corresponding values for s < 0. And each has its own Riemann Hypothesis, with a set of zeros (of positive and negative imaginary part) which are postulated to lie on that vertical line through 0.5.
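As a purely numerical illustration of that last point (and certainly not a proof), the mpmath library can locate the non-trivial zeros of the Riemann zeta function one by one; the first few all sit on the vertical line through 0.5, with the function vanishing there to within working precision:

from mpmath import mp, zetazero, zeta

mp.dps = 20                        # working precision in decimal places
for n in range(1, 6):
    rho = zetazero(n)              # n-th zero with positive imaginary part
    print(n, rho, abs(zeta(rho)))  # real part 0.5; |zeta(rho)| is ~0

This is only a spot check of the first handful of zeros; a vast number have been verified numerically in this spirit, while the general proof remains open.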
The attempt to prove that the zeros do in fact lie on the line has proven as elusive for all these L-functions as it has for the Riemann zeta function. And from my perspective, I have little difficulty in appreciating why this is the case, as by the very nature of these functions a conventional proof (or disproof) is not possible.
In other words, the requirement that all the zeros in each case do indeed lie on the imaginary line is already implied in the very acceptance of the real number line.
However the deeper issue for me is
the manner in which this cornucopia of other L-functions holistically relates to
the Riemann zeta function.
So adding just a little to what I
have already said on the matter, my present understanding would mirror that
associated with the unlimited set of higher number dimensions for interpreting
reality.
Here, 2 served as the prototype for
all dynamic relative interpretations of the number system. Thus, though higher
dimensional interpretations do entail ever increasing levels of complexity, the
basic manner of their overall interpretation remains unchanged.
Therefore in like fashion I would
see the Riemann zeta function serving as the prototype for all other
L-functions. So the fundamental significance of their holistic zeros for
qualitative appreciation would again be similar though allowing for more
variation and complexity in experience.
Though an extensive amount of research has taken place on the physical counterparts of the zeta zeros associated with the Riemann Hypothesis, I am not aware of any similar research with respect to possible physical counterparts of the zeros associated with other L-functions. However such counterparts, if identified, are likely to be close relatives of the quantum chaotic system associated with the Riemann zeros.
These L-functions do in fact all represent the ways in which varying arrangements of primes and natural numbers (though potentially of a highly abstruse nature) can be dynamically related to each other through multiplication and addition in a seamless manner.
In fact as I was writing this, I
began to see another way – bearing in mind earlier discussion on the notions of randomness
and order respectively – of fruitfully
highlighting the relationship of the general class of L-functions to their most
important original member (i.e. the Riemann zeta function).
Again, all L-functions can be defined both as a sum over the integers and as a product over the primes.
The Riemann zeta function in its
sum involves the most ordered arrangement of integers (relating to all the
natural numbers) and the most random product arrangement (in the inclusion of
all the individual primes).
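A small numerical sketch, assuming nothing beyond sympy's prime generator, may help to make this dual character concrete: for a value of s greater than 1, the ordered sum over the natural numbers and the 'random' product over the individual primes close in on the same value (pi^2/6 when s = 2):

from sympy import primerange

s = 2.0

# ordered aspect: the sum over all the natural numbers (truncated here)
dirichlet_sum = sum(1 / n**s for n in range(1, 100000))

# random aspect: the Euler product over the individual primes (truncated here)
euler_product = 1.0
for p in primerange(2, 10000):
    euler_product *= 1 / (1 - p**(-s))

print(dirichlet_sum, euler_product)   # both approach pi**2/6 = 1.64493...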
So, as I have suggested earlier,
the Riemann zeta function must be meaningfully interpreted in a relative manner,
showing in this case how the extreme notions of order and randomness in terms
of the number system are fully related to each other in a dynamic complementary
manner.
All other L-functions entail varying configurations with respect to order and randomness. So, on the one hand, the pure order we associate with the succession of natural numbers is sacrificed to varying degrees; on the other, the pure randomness that we associate with the full set of individual primes is likewise sacrificed. Thus, with respect to the unique configuration of order and randomness involved, each L-function shows how the dynamic relationship between both aspects is consistently preserved in terms of the number system.
For example, the Dirichlet beta function is one of the best known of the simpler L-functions (i.e. the Dirichlet L-functions), with its L-series:
1 – 1/3^s + 1/5^s – 1/7^s + 1/9^s – …
Though there is certainly a discernible order here in the sequence of terms, it is not quite as pure as the archetypal pattern associated with the full sequence of natural numbers. So all the even numbers have been discarded, and the sign of each successive odd term alternates between positive and negative.
The corresponding product over primes expression is:
1/(1 + 1/3^s) · 1/(1 – 1/5^s) · 1/(1 + 1/7^s) · 1/(1 + 1/11^s) · 1/(1 – 1/13^s) · …
Here, in corresponding fashion, we have a departure from the archetypal random pattern of the sequence of individual primes. So the first prime, i.e. 2, is omitted, and then a definite form of ordering is used to decide the sign within each factor. So if, on division by 4, a prime leaves a remainder of 3, its factor is given a + sign; if however the remainder is 1 (which is the only other option), it is given a – sign. So though the sequence of primes is indeed random, the application of the requisite signs is now based on an ordered procedure.
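For the sceptical reader, the same truncation exercise can be repeated here, again assuming only sympy's prime generator; both the alternating series and the signed product above home in on the same value (Catalan's constant, roughly 0.91597, when s = 2):

from sympy import primerange

s = 2.0

# the L-series: 1 - 1/3^s + 1/5^s - 1/7^s + ... (truncated)
series = sum((-1)**k / (2*k + 1)**s for k in range(200000))

# the product over the odd primes, with the sign fixed by the remainder mod 4
product = 1.0
for p in primerange(3, 20000):
    sign = 1 if p % 4 == 3 else -1   # remainder 3 gives a plus, remainder 1 a minus
    product *= 1 / (1 + sign * p**(-s))

print(series, product)   # both approach Catalan's constant, 0.915965...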
In this way, with L-functions, we
can have endless variation in the manner in which order and randomness are
defined (once we depart from the pure archetypal notions of both associated
with the Riemann zeta function).
However the crucial point is that, for each L-function, a dynamically coherent relationship is established between a specific type of order and an associated type of randomness.
So to sum up, there are an
unlimited number of ways in which order and randomness can be dynamically
related with respect to the number system (with the Riemann zeta function
representing the extreme archetypal example).
This then leads to a key question
which Erica Klarreich posed in her excellent “New Scientist” article: “Why do
the primes achieve such a delicate balance between randomness and order?”
The holistic explanation for this feature lies in the close relationship between the notions of randomness and order and the corresponding notions of independence and interdependence with respect to the number system.
Therefore from a holistic
perspective, it is vital that the relative independence of each individual
prime in quantitative terms can be seamlessly synchronised collectively with
the corresponding relative interdependence of the primes (with the natural
numbers) in a qualitative manner.
And because number, in terms of
both its quantitative and qualitative aspects, represents the inherent encoding
of all created phenomena (in physical and psychological terms) this requirement
of the number system is likewise necessary so that each phenomenon, while
maintaining a unique differentiated identity, can also be seamlessly integrated
with all other phenomena in a collective manner.
Again we can see this feature
demonstrated especially well at the quantum level of physical reality. Though a
random nature attaches to particle interactions in individual terms, a
remarkably consistent order characterises their collective behaviour, so that
quantum mechanics can make predictions with an extraordinary degree of
scientific accuracy.
And when one can successfully
experience reality in this manner (where randomness and order are dynamically integrated),
object phenomena then radiate an intense numinous quality. So for example
someone advanced in contemplative awareness can appreciate how spirit is immanent
in each phenomenon (as its very source), while also recognising that it
collectively transcends all material phenomena (as their ultimate goal) in
seamless realisation of a shared qualitative interdependence.
This is why I am so confident that
eventual discovery of the neglected holistic dimension of mathematics will
naturally lead to an authentic spiritual transformation (not directly related
to the established religious traditions).
In this context, for one to deal successfully with a highly demanding lifestyle, where spiritual equilibrium can be continually maintained in the midst of intense phenomenal activity, at least some of these other L-functions would be holistically relevant in experience. And once again, this holistic appreciation would tend to occur naturally where understanding of such functions takes place in a dynamic balanced fashion (incorporating both quantitative and qualitative aspects).
And true mastery of the zeros for
the Riemann zeta function would greatly facilitate similar mastery for the
zeros of related L-functions.