Please see also:
"Why Seek Frequency Domain Continuum Models?".
** Summary
This document seeks to reconcile certain differences of view that are often apparent between physicists and cyberneticians. To this end it proposes and discusses certain ways of making mental models, and by way of example it charts a path for applying the method to the physics of low energy electromagnetic and electronic interactions.
With regard to making our own observations we are often naively inclined to an attitude in which we presume that "seeing is believing". The resulting formalisation is what we call "objectivity". However, certain inherent constraints regarding observations act to make this stance too simple as a model of any reality of which the observer must be a part.
First we may lay down some imperatives regarding observation:
a) Observation seen as a process fundamentally and essentially entails amplification.
b) Observational results and observer self identity are only able to exist in thermodynamic terms.
c) When sufficiently small entities are considered then we are bound to invoke models which entail conservation as a basis.
Next, regarding information, being the results of observation:
1 Information can only be represented by thermodynamic parameters of partitioned modal ensembles (including the single mode as a special case) and,
2 Information can only be amplified in its expression (propagated to plural observers) by certain types of processes acting upon thermodynamic ensembles.
Given the necessary form of the observation (process) it is inherent that a quantal or a noisy appearance will arise in nature. This is in the nature of a theorem, and needs proof.
** About the Document.
This is a discussion of and proposal for certain ways of making mental models and an example of developing the method in the area of the physics of low energy electromagnetic and electronic interactions.
This is a developing essay with hypertext form, intended mainly for interaction via the Internet. It is liable to change from time to time when effort is available or when criticism demands, hopefully to improve it. It is not a learned essay in the sense of claiming comprehensiveness in relation to prior work. Indeed it is intentionally far from that, because it seeks exploratory clarity before rigour! However, to avoid total departure from intellectual reality and convention I am trying to introduce references that are balanced and representative so as to link into the world of formal study of the respective areas, and sufficient to form an outline of foundation.
Although the subject is that of making convenient models, this essay is more concerned with justifying some new models than it is with simply expounding them. Please try to forgive me if some of the arguments I use for this purpose are less than clear. This is a cybernetics issue, and unfortunately the modelling of model making is a rather more novel and tricky affair than the models which are the objects of its concern.
** Physics and Cybernetics.
There has arisen a tiresome and mystifying disparity between the views of nature expressed respectively by physicists and cyberneticians. This has been particularly pronounced in the last three or four decades, as cybernetics has developed its "second order" view in which the processes of emergence and observation themselves come under scrutiny. Whereas the physicist follows a long established tradition of ascribing the emergence of form, as observed through the results of experimentation, to the phenomena of an external and universal reality (thus taking an epistemological stance), the cybernetics oriented thinker is inclined to recognise the self assuming role of the observer as an essential part in bringing about the form of the resulting observation (producing a sense of ontology). As an example, consider certain issues of how the present day mainstream of quantum mechanics is constructed conceptually.
The subject is made the more interesting because this sort of disparity is not confined to the differences between physicists and cyberneticians; it is merely typified by, and perhaps easier to demonstrate in, these two mind sets. As a comparable example, the abiding tension between the "algorithmic AI" school of thought and the "cybernetics of emergence" shows similar features. The objective here is to explore ways in which we might show that these antagonistic sets of ideas are in some sense views of the same thing.
The matter of how we handle the interplay of discrete and continuous conceptions of things is closely bound up with this difference. Are we to think in terms of nature chopping things up before we see them or should we allow that the presumption of our own identity as observer is an essential part of the chopping up process? It is desirable that we should as far as possible reconcile these two views.
Some may say that the distinction does not matter in any case, so long as the resulting models show equal or comparable power of prediction. However, even if they are equal (and it would be nice to prove that in each of the important cases!) there is also the issue of how such models affect the thinking and idea creation processes of their human users. The effects appear not to be neutral in this respect. Differences of view are deep seated; schisms are often manifest; people retreat into their conventional mind sets to avoid the stress of the differences. Thus so long as we do not reconcile these issues in public we are prone to some form of recurring tiresome intellectual animosity.
** Preamble to the Development.
With regard to making our own observations we are often naively inclined to an attitude in which we presume that "seeing is believing". The resulting formalisation is what we call "objectivity", whereby we limit our belief to those phenomena that submit to replicable observation, replicable that is under some set of more or less definable conditions. We then exclude as meaningless all phenomena which we cannot submit to this formal process for their observation. We proceed to reduce uncertainties and attempt precise definitions of things through infinite recursions of hypothetical observations, but there are highly significant cases where the replicability is not available, and others where the infinite recursion does not converge to a single result. In fact it appears that, to some extent, all observations are inherently subject to these tiresome forms of limitation, and we must and do make allowances for the consequently less than perfect value of their results.
We might alternatively take a somewhat more circumspect stance at the outset - "seeing is not necessarily believing". After all, neither the conditions under which an observation is made nor the boundary between the observer and the observed can ever be totally defined. The determinations of both the conditions and the boundaries themselves involve indefinitely large amounts of additional observation. Accepting such limitations we might choose to study the nature and limitations of the process of observation itself, and thereby aim for a more flexible method and deeper understanding in assessing the limits of our observations. That is my objective here.
Because of the way that this latter stance makes us consider the uncertainties of observerhood, it carries with it an appreciation of a wholeness involving both observer and observed, in which the sense of the interaction as bidirectional, even though asymmetrical, comes increasingly into view. These changes of view come about regardless of whether the observations we consider involve a few large interactions of macroscopic systems and parties or the myriad interactions characteristic of microscopic systems. They also come about whether we choose to think of ourselves as party to the observerhood or as conjecturing or observing some other process of observational interaction.
Alternative models of any given thing, even if they evaluate to predict identical consequences, may not be equally useful for the human mind (nor, in yet other ways, for use in computational processes). Increasing depth of deductive structure generally acts to oppose mental clarity in a model, so the careful choice of abstract notions and their deductive relationships is important in building our mental models. Thus there is appeal in a type of model which is internally elegantly structured, rather than being just a predictor of results with external (behavioural) structure codified by a symbolic scheme. This is to say that we often find it more fruitful to think and talk in "process synthetic" terms rather than "consequence analytic" terms. All models and symbolisms are subject in some degree to the synthetic nature, but those in which process is made more explicit give more ready guidance to the potential and limitations of extrapolated applications, i.e. to the potential for creative use. Such a model must in any case be constrained as either complete within its respective range of application or else have adequately defined limitations to its validity.
The models developed in the early stages of new work usually need to pay particular attention to content. They require above all comprehensiveness and accuracy, and will be used only by persons immersed in the subject. Conversely, models for later development are likely to need a form capable of inhabiting the minds of persons entering the field at any arbitrary point and with different backgrounds. These are substantially different requirements. This accounts for the disparity which already exists between the sorts of models used by physicists and those used by people dealing with similar issues but outside of physics. Of all the disciplines (except perhaps, depending on how you think of the words, for religions!) physics seems to be innately and to a unique degree concerned with the reality of the observed, and as such it produces models which are not very convenient to use in the sense I have described. There are also signs as of this date that physics as a sub cultural activity has a tendency to protect as sacred the rather arcane models which result from its pioneering investigations. Mathematics could be thought to have a similar nature, but because its domain of attention is the human mind and its realities are mental, it has common ground with many other pursuits and disciplines. This, coupled with a whimsical sense of elegance, seems to produce fewer problems with the mathematics sub culture in this respect.
The motivation here is partly a wish to help with the application and development of low energy physics theory and partly a fascination with ways of making better models. I am concerned with the prospect of re-expressing some basic physics models to see if a form more suited to ongoing development and application can be established. The field of concern is that of low energy interactions and is covered mainly by the so called "Quantum Electrodynamics" theory (abbreviated to QED) as applied to electromagnetic and electronic phenomena.
** Charting Development of the Model.
I start here by placing the process of observation at the centre of the field of play using a principle I like to refer to as the recursive observation lemma:
Recursive Observation Lemma: Any general theory of observable reality that is valid for an observer must be valid in accounting for the processes in an observed observer.
Skipping, for the present, over any sense of paradox which this lemma throws up, we next recognise amplification as a fundamental issue in the propagation of influences in the form of what we call "information". This gives rise to some features in the nature of observation which for our purposes of development here we might elevate to the status of axioms. A pair of linked axioms, each a corollary of the other, can be written as:
a) Observation seen as a process fundamentally and essentially entails amplification.
b) Observational results and observer self identity are only able to exist in thermodynamic terms.
Next introduce a necessity brought about by any attempt at fine or microscopic modelling:
c) When sufficiently small entities are considered then we are bound to invoke models which entail conservation as a basis.
Wherever these entailments come together in a reductionist method we are led to lemmas about information:
1 Information can only be represented by thermodynamic parameters of partitioned modal ensembles (including the single mode as a special case) and,
2 Information can only be amplified in its expression (propagated) by certain types of process acting upon thermodynamic ensembles.
Given the necessary form of the observation (process) it is inherent that a quantal or a noisy appearance will arise in nature. This is in the nature of a theorem, and needs proof. It says that even if the process were continuous it would still have to appear to produce quantal or uncertain information results as seen by an observer whose existence is (necessarily) only expressible in thermal terms.
This rule (uncertainty as an imperative) will also apply to structural concepts such as atoms. Can we justifiably use the "synthetic" atom construct in the proof for electromagnetic and electronic phenomena? Once justified, this would set the scene for a sort of induction procedure taking the same method further into nuclear physics and quantum chromodynamics.
Regarding amplification, there seem to be two scenarios:
(i) continuous - linear, ... thermal noise.
(ii) event resolving - nonlinear, ... quantal noise.
Correspondingly we introduce two concepts of process, each of which we term "complete" if it can be used recursively to achieve indefinite degrees of amplification for only limited spoiling of the information. One we might refer to as "continuous complete" and the other as "quantum complete". These two notions are interestingly asymmetrical in that the quantal process destroys more information regarding the observed state than does the thermal process. As a result, if the two processes are causatively cascaded then, depending on which process comes first, either more error (Q first) or a reduced information space (C first) is produced.
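By way of a toy numerical illustration only (everything here - the stage models, gain, noise level and quantal step - is an assumption chosen for demonstration, not part of the argument), the two kinds of stage might be sketched and cascaded in either order:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def c_stage(x, gain=10.0, noise=0.5):
    # Continuous (linear) stage: multiplies by a gain and adds
    # Gaussian "thermal" noise to the amplified output.
    return gain * x + rng.normal(0.0, noise, size=x.shape)

def q_stage(x, step=1.0):
    # Event resolving (nonlinear) stage: collapses its input onto
    # discrete quantal levels spaced by `step`.
    return step * np.round(x / step)

signal = rng.uniform(0.0, 1.0, size=10_000)  # the "observed" quantity

q_then_c = c_stage(q_stage(signal))  # Q first, then C
c_then_q = q_stage(c_stage(signal))  # C first, then Q

# Root-mean-square error of each cascade relative to ideal noiseless gain.
ideal = 10.0 * signal
rms_qc = np.sqrt(np.mean((q_then_c - ideal) ** 2))
rms_cq = np.sqrt(np.mean((c_then_q - ideal) ** 2))
```

With these particular parameters the Q-first cascade shows the larger root-mean-square error, while the C-first cascade collapses its output onto a small set of discrete levels (a reduced information space); other parameter choices shift the balance, so the sketch is a prompt for exploration rather than a demonstration of the general claim.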
One line of proof comes from first postulating a process of propagation (amplification) of detected information via wave continua of a very general sort, one which permits extrapolation to an indefinite degree of amplification (at some sort of logarithmic cost in accuracy to achieve this infinite extension). Using such an overtly continuous model it is then required to show that no model of amplification could meet the completeness requirement unless it yielded a limitation on observations to being either noisy, quantal or both. The possibility of proving this looks promising using bifurcation concepts and theory as applied to the domain of continuous modal excitations under the action of weak energy transferring couplings (consider Rabi effects in Bloch space).
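As a concrete instance of the weak-coupling modal dynamics invoked here, the textbook Rabi formula for a two-level system driven by a near-resonant field (see Allen & Eberly [LAl87]) can be sketched numerically; the function and variable names are merely illustrative, and this sketch is not itself a piece of the proposed proof:

```python
import numpy as np

def excited_population(t, rabi_freq, detuning=0.0):
    # Rabi's formula: the probability of finding a driven two-level
    # system in its excited state at time t, starting from the ground state.
    omega_eff = np.sqrt(rabi_freq**2 + detuning**2)  # generalised Rabi frequency
    return (rabi_freq / omega_eff) ** 2 * np.sin(omega_eff * t / 2.0) ** 2

# On resonance (zero detuning) the population cycles fully between 0 and 1;
# off resonance the oscillation is faster but never complete.
t = np.linspace(0.0, 4.0 * np.pi, 200)
on_res = excited_population(t, rabi_freq=1.0)
off_res = excited_population(t, rabi_freq=1.0, detuning=1.0)
```

The point of interest for the argument above is that a perfectly continuous dynamical description of this kind still yields, under weak coupling to a detecting ensemble, outcomes that present themselves in quantal or noisy form.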
Notice that once this point is accepted it simultaneously debunks both quanta and continua as the fundamentals in nature (perhaps uncertainty and noise may survive better in this respect). However it acts as a licence for the use of continuous fundamental models to the extent that, in recognising that no continuum could ever be directly observed, we keep such models as essentially mental devices. Their qualification is not as a reality of nature but as a convenience of expression and explanation, and then only in so far as they can be shown to be so convenient.
By the way, any attempt to defend as primal the quantal basis of modelling, by claiming that it avoids reliance upon continua, will not work, because all of the conventional quantum theories rely in essence upon a continuum for the deduction of probabilities of quantum detection/observation. Presumably, working in the opposite direction, there will exist a dual of the "amplification completeness" proof described above, but as I see it so far the ways in which these oppositely directed arguments develop are in various respects tantalisingly different in effectiveness of modelling. Of the two, the continuum model has the greater flexibility of mental handling and practical application. The step I am proposing is not, in terms of fundamental theory, a radical one, but nevertheless it might mean a great deal for comprehension and creativity.
My thesis here then reduces to saying that continuum models are convenient (there is an asymmetry of convenience), and that this is particularly true with regard to thinking, as compared to expression or computation.
** References and Background Reading
[GSt93] G. Sterman, "An Introduction to Quantum Field Theory." Cambridge University Press, 1993. ISBN 0-521-32258-8 (hardback), 0-521-31132-2 (paperback).
[HSt93] H. P. Stapp, "Mind, Matter and Quantum Mechanics." Springer Verlag, 1993. ISBN 3-540-56289-3, 0-387-56289-3.
[LAl87] L. Allen & J. H. Eberly, "Optical Resonance and Two-Level Atoms." Dover, 1995. Originally published by General Publishing Co., Ontario and by Constable & Co., UK, 1975.
[SAu95] S. Y. Auyang, "How is Quantum Field Theory Possible?" Oxford University Press, 1995. ISBN 0-19-509345-3.
[TMa] T. W. Marshall & E. Santos, "The myth of the photon." In "The Present Status of the Quantum Theory of Light", ed. S. Jeffers et al., Kluwer, Dordrecht, 1997, pages 67-77. (Marshall: Dept. of Mathematics, Univ. of Manchester, Manchester M13 9PL, UK; Santos: Depto de Fisica Moderna, Univ. de Cantabria, 39005 Santander, Spain.)
[VGo] V. Gomer & B. Ueberholz, "Single Atoms in a MOT." University of Bonn web site: http://www.uni-bonn.de/iap/l_m2.html or see papers at http://www.iap.uni-bonn.de/ag_meschede/papers/pub2000.html
For a basic web text on quantum mechanics (this attractive text is provided by The College of Saint Benedict and Saint John's University, MN) see:
For some important semiconductor theory wrapped up in a bit of fun see: