
11. silicon based intelligence

 

The topic here is not to be confused with silicon-based life. The definition of life includes physically self-detaching reproductive capability; the present subject excludes such systemology. Beyond this, one could draw parallels between the coarse aspects of life and counterparts within the proposed systemology.

The fundamental building block of the approach here is the concept of CR links — conditioned response. I suspect that this little trick of nature is the primary ingredient of every variety of higher learning and reasoning. Numbers are important in this, but the winning organization of those numbers is most responsible for the relatively keen levels of awareness we enjoy.

The key to applying this concept to artificial learning is the recognition that a neural net needs temporal information; and that to learn such order, CR can be employed in such a way that it is the successive order of the stimuli that creates the links. The links are significant within an ongoing, cyclical process, relative to recurrences of similarly ordered stimuli. To understand the reasoning that has guided this approach, it is helpful to recall the philosophy that has been developed as reference frames. Timing is the true substance of reality. These chapters, dealing more directly with consciousness, needed those other chapters ahead of them. Those chapters may be easier to identify with, though, after getting through these. Programmers, circuit designers, and biologists are familiar with such interdependent relationships, as systems.

To successfully handle the process of temporal data encoding, the system must parallel biological mental systems. This is to say that gross organization must be defined, to act as a CR "tree." Meaning rises into the tree by association, and likewise falls out of the tree to produce responses. It is essential that responses are then fed back in, to be included in the process of association. You can’t learn without feedback. There is no other way to "know what you are doing." In bio-systems, this feedback takes place internally, as well as externally.

 

My Computer Is Warm

 

My computer uses the exact same kind of protons, electrons, and neutrons as the ones used in your brain. The rules involved with these two systems are obviously from entirely different branches of reality’s rule tree. But, through parallel logic, the end result of each system may turn out to be very similar, in terms of the logic supported as behavioral waves in either system. This could be like seeing nothing but water, as a place to drop a rock in, to get waves; and then discovering that it happens in the air too. Let us speculate that the same kind of system of relative logic is supported by either mass medium.

Both systems are energetic, so they produce the customary stray infrared photons we sense as heat. One thing is for sure here... when my computer is cold, it is a dead computer. Sometimes, when I'm listening to it, and it's warm, I really get the feeling that the opposite is true. It gets a sort of "mouse-ness."

Beyond reproduction, the "life" factor in neural cells may simply amount to being a battery; a very special one that has "learned" to recharge itself through environmental interplay — and learned to keep its case intact, rebuild and repair as necessary, revitalize its chemical components, and replace them regularly. It's a tiny little battery; ready to pull duty in communications applications. Life is not necessarily conscious. Maybe it's the right configuration of communication that is. It's a special kind of timing. It's timing that relates meaning to itself, over time.

The environment presents many opportunities for enhanced survivability of such chemical systems. As a community, the cells can "discover" the DNA program for a number of features that are not unique to life; just available for use from physics... like the lens. DNA was able to stumble upon the lens, because it was simple, and it was there. DNA tends to latch on to trends in improvements of such features, because it is the memory to do so. The advantages gained improve the odds that the new sequence will survive to replicate. The replication is the remembering. The current status of a given feature then serves as a new starting point for further refinement.

DNA found conscious decision-making communication-path architecture, because it too was simple and available; and very advantageous to survival. Like the lens then, perhaps consciousness can be there without life. Unlike the lens, it needs a battery.

 

The Silicon Neuron

The silicon neuron will emulate the essential functions of a biological neuron. These may entail complexities we have yet to discern. It is also possible that the required features are a relatively simple sub-set of the overall system. Our job is to figure out which features are purely metabolic, and which are required contributions to the support system of consciousness, and its utility capacities.

Consciousness and animal behavior are a function of neurological transmission of data. They cannot exist without it. We can safely assume that the fundamental features of the neuron, that support data transmission, are required, at the least. These include: an input port (dendrites), an output port (axon), an intermediate FM pulse generator (cell/cell body), a power source (metabolism), and the hard part — a truth table for specifying the behavior of neurons versus modes of stimulation.

There is sufficient evidence to conclude that the truth table includes modifiable parameters that amount to the impression of logical memory elements. This synaptic chemistry is well beyond the scope of this book, or my pea brain. The motivation here, is to proceed from a simplistic frame, toward a functional model; gathering only those encumbrances that the endeavor seems to point to as being inescapable requirements of the system.

 

 

 

There is evidence to support the view that use of any given synapse increases its sensitivity; meaning that future stimulation of that synapse will have a more pronounced effect on the neuron on the dendrite side.

It is indisputable that neurons produce pulses more frequently during the periods when they are being stimulated. There is evidence that this effect increases with the degree of stimulation to the neuron's dendrites. This is a function of how many other neurons are stimulating a given neuron, and of how enhanced the active connections are.

There is some evidence that, at least in some cases, a synapse will be enhanced to a greater degree if the dendrites are being stimulated by a greater number of neurons, and that this enhancement is further increased in those cases where the receiving neuron has been induced to fire, or to fire more rapidly. This is a system mode that goes beyond straightforward memory impression — it is a support mechanism for association at the fundamental component level of neurological data transmission.

 

I would like to conjecture the possibility of another level of complexity. The dendrite and cell body system may have a further degree of sensitivity, relative to the matching of stimulation patterns on a neuron's dendrites with the prior impression of similar patterns. The mechanism that would implement this behavior might have arisen from the fundamental process of cell restoration. The DNA/RNA process might have evolved a memory capacity of its own, to deal with the demands of stimulation to the cell. An organized affiliation with the dendrites would be a more efficient solution to the "problem." This could have served as an evolutionary step toward enhanced sensitivity to specific stimulation patterns in the dendrites.

 

Another characteristic of neurons that might easily be overlooked by a modeler is that they tire out. Be careful about judging characteristics as defects. Evolution deals with them. It always works with reality. It utilizes the rules it's composed of.

Neurons also fire spontaneously at a "base rate," a minimum low frequency. Their stand-by sensitivity goes up with time, until they just "go off." A typical rate is once per second. The maximum stimulated rate is about a hundred times in a tenth of a second.

 

The Silicon Brain

 

The silicon brain will be an arrangement of silicon neurons that takes advantage of all of the empirical "trial and error" work that mother nature has done, to produce the functions of interest. Basically, neurons communicate information from the senses to the cortex, and from the cortex to various muscles.

The job of tracing the actual neural pathways has been extremely difficult. Imagine troubleshooting a circuit composed of billions of transparent, microscopic "wires." Nevertheless, results are coming to print.

There have been surprises. To me, the most interesting one is the general trend to supply abundant, dispersed feedback between the transition levels of data flow. Data does not just go from the eye to the cortex, for example. It makes stops along the way, where it meets up with about ten times as many lines leading toward it from the cortex. Yes; the brain sends information into your eyes!

We also have data that suggests that we see about ten times more visual content than there is actual photon information reaching our eyes. How can we see detail that isn’t contained in the electromagnetic communication? It is fun to speculate that some sort of identity, between the person and his environment, is responsible. We all "ESP" most of our information in — all we need to operate on, to accomplish this, is a minimal sampling of the image. This is fun, but not as reasonably available as the more apparent explanation. The extra information comes from our memory. We learn how to see. We associate coincidences in great number, from great detail, over a great many "frames," and linkages of combinations of such frames. Consciousness is this memory-based associative process, that places pattern meaning relative to itself and other meaning, over time; so vision is perceived.

 

The Silicon Organism

 

When we create a silicon brain with our silicon neurons, we will have choices to make that are very similar to those concerned with parenthood. In addition, we will have a variety of choices that are beyond that familiar realm.

 

We can choose to include vision, hearing, speech, and various motor capacities. If we succeed in producing systems that are capable of higher learning, I suspect that they will naturally develop some emotion. I think we will have an option of facilitating this inclination, and will find that it is an important factor in learning. I suspect that we will find that learning improves with the potential for happiness.

 

We may get this far, and create opportunities for some very specialized job descriptions. Silicon organisms could develop with some very unique senses and motor skills, derived from the instrumentation of our technological industries.

 

 

beepers for commodore 128

 

This section was initially intended to be the whole book. In developing the presentation, I found it impossible to ignore the philosophy that was its source. The philosophy itself developed more during the course of arranging its words for print. The program is my attempt to approach a mathematical-scientific treatment of the subject. I apologize to those who require a more standard mathematical train of expressions. This is the direction I was most strongly attracted toward; and now this is the direction of my momentum. When this book is done, I intend to return my focus to the program development; moving into the PC.

BEEPERS CONTENTS

Program Description

Beepers and Sleep

General Stimulation

Adjusting Beepers

Reflection as an Aid to Learning

Ongoing General Feedback Learning

Teaching Beepers

Program Flow Chart

Beepers Ear Schematic

Register Designations

Coarse Memory Map

The Screen

Program Modes

(Program Listing)

The Scan

Indexing

AOL/SWAP/STORE/FIRES/TIRE

KHS

Dual Area Regulation

KHS, Regulation and Consciousness

Set-Up, Load and Run

Mathematical Analysis

Beepers Improvements

 

 

Beepers are simple little creatures that live in a very small computer. They are real in the sense that they interact with the world, each other, and themselves. Their behavior is an ongoing process of development, built out of these interactions.

The beepers have been designed with functional characteristics that parallel some of the principles of mammalian neurological interaction. They are a product of a limited sampling of a diverse range of scientific literature, as well as of a number of assumptions; and of corrections brought about by problems exposed in development of the program.


Program Description

 

The program defines two beepers. The program organizes the computer into two sets of Ns (neurons). It handles each N, one at a time. It looks at each N to see if it is active, or at rest. It does this with one beeper, and then it essentially repeats itself to do it with the other beeper. Then the whole cycle is repeated.

In each cycle, a different pattern of Ns is involved. It is this pattern, and its changing character relative to itself, that is the developing definition of self-meaning. At the same time, it is a depiction of the world, pieced together with an ongoing influx of abstractions from the world.

 

You could say that the main job of the program is to detect the quiet Ns as quickly as possible. I guess that in any given cycle, or "reality frame," there should be approximately 10% of the Ns active. Thus, the program has been set up to run through the quiet majority of Ns as quickly as possible. It will become apparent that, in terms of what we're attempting to accomplish here, the limiting factor imposed by the hardware is not memory capacity, but rather, speed. To get more Ns involved you need more speed. Memory comes more from the number of Ns than from how big each one is; though both factors contribute. You have about 10^10 or 10^11 Ns; each one "knows" up to 10^4 other Ns. The number of Ns is 10^6 times more important than their "size" — their capacity to immediately access the rest of the potential process — for humans.

To facilitate speed, and for a few other conveniences, the Ns themselves are split up into two parts. The small part occupies two bytes, and the "large" part takes sixteen. So, each N consists of 18 8-bit bytes of data. Of course, the data won’t run itself; the bulk of each N is actually the section of the program called the "N Loop." This is like the DNA; it tells each N how to behave, and they all behave about the same way. It’s mainly their relative "position" and data that differentiate them. Obviously, this is simplistic; but given the starting restraints, the approach is the best compromise I could develop.
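
As a rough sketch of that data layout (in Python rather than the 6502 code the program actually runs, and with field names of my own; the roles of these bytes are described in the next few paragraphs), one N amounts to:

    # Illustrative layout of one N's 18 bytes (names are mine, not the program's).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NSmall:               # the 2-byte "small part," visible on the screen
        in_num: int = 0         # dendrite byte ("IN#"): other Ns raise this value to stimulate the N
        delay: int = 0          # timer byte that controls how often the N fires

    @dataclass
    class NLarge:               # the 16-byte "large part," reached through a pointer
        xth: int = 0            # how many Fires before the N tires and takes a time out
        out_list: List[int] = field(default_factory=lambda: [0] * 15)   # LSBs of the Ns hit when this N Fires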

 

The first byte of N, in its small part, is its dendrite area. Other Ns will stimulate this area (literally!) by increasing the numeric hex value there. (In a prior, more complex system, the N was sensitive to the particular pattern of bits set in this dendrite register... it kept a short list of "familiar" stimulation patterns.) This system is FM, like real Ns are; and a given N will "Fire" with a higher rate of repetition if its dendrite area is being more heavily stimulated (the maximum rate is once every other Main Loop cycle — one loop for each beeper).

 

For purposes of speed of indexing data, the small part needed a second byte; as a sort of "spacer." Since these first two bytes are visible on the screen (in slow mode only) for one of the two beepers, the best other use to display was a byte called "delay." This byte is similar to the dendrite IN# byte, in that it relates to the activity level of the given N it belongs to. It is needed to determine how often the N will fire. It’s a timer that runs out to trigger firing that N on that main cycle. It provides an opportunity, as does the IN#, to custom tailor the response and activity characteristics of a given N in a given "cortical area."

 

The C128's memory from 2000 toward 35FF is taken up by this first, small part of some twenty-eight hundred Ns (all addresses are in hex; all quantities are decimal, unless preceded by a "$"). In bank 00 there is one set for one beeper, and in bank 01 there is the other set for the other beeper. The program occupies 0B00 to 1000, and 1300 to 1FFF, almost identically in each of the two banks. The rest of the area below 1300 is taken by C128 operations. The C128 also uses a few bytes starting at FF00; and the rest used by the beepers is from 3600 to FE43, and a scratch pad area above FF05. I use the Warp Drive cartridge, to speed disk loading, saves, and utilities; so there is some use up at the high end there that you have to work around.

 

The area from 3600 toward 4BFF is taken up by pointers. These are used by the N Loop’s indexing system to access the larger part of each N. Those larger parts take up from 4C00 toward FC00.
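
Pulling those addresses together, the bank 00 layout looks roughly like this (bank 01 is nearly identical, for the other beeper); the dictionary below is just a reference summary of the ranges named above, not part of the program:

    # Approximate bank 00 memory layout, from the description above (hex addresses).
    memory_map = {
        "program":                  [(0x0B00, 0x1000), (0x1300, 0x1FFF)],
        "N small parts (~2800 Ns)": [(0x2000, 0x35FF)],
        "pointers to large parts":  [(0x3600, 0x4BFF)],
        "N large parts":            [(0x4C00, 0xFC00)],
        "scratch pad":              [(0xFF06, 0xFFFF)],   # "above FF05"; the top end is shared with Warp Drive
    }
    # The rest below 1300, and a few bytes starting at FF00, belong to C128 operations.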

 

The first byte, in the larger set of sixteen, is called the "xth." "X" is the number of times an N can Fire before it gets tired and has to take a "time out." The time out consists of clearing the IN# — in other words, it may miss firing the next time, but then is ready to fire again, x times. This has proven to be sufficient interruption, in this system, to avoid the problem of pattern repetition; a problem especially during initial development of the data assimilation process. Pattern repetition is the tendency for a limited set of Ns to get involved with each other, and tie up all the available time, permanently. Another requirement, for dealing with this problem, is inhibition — Ns not only stimulate other Ns, but also do un-stimulation. How this has come to be implemented will be described shortly, when discussing the gross organization of N interconnection.
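
In Python-flavored pseudocode (illustrative only; where the real program keeps its running count of Fires is not spelled out here), the tire behavior amounts to:

    # Sketch of the Ns Tire idea: an N may Fire x ("xth") times, then its IN# is cleared,
    # so it can miss a cycle before it is ready to fire x more times.
    def fire_and_maybe_tire(n, fires_so_far):
        fires_so_far += 1
        if fires_so_far >= n["xth"]:
            n["in_num"] = 0          # time out: clear the dendrite IN#
            fires_so_far = 0         # ready to count toward x again
        return fires_so_far

    n = {"in_num": 9, "xth": 3}      # toy N; field names are illustrative
    count = 0
    for _ in range(4):
        count = fire_and_maybe_tire(n, count)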

 

The remaining fifteen bytes per N contain the intelligence. They are a list of which other Ns a given N will hit when it Fires. To get on this list, you have to be a neighboring N that is active or ready to Fire at the same time that this N is going to Fire. This is the mechanism that implements the associative filing of data. The gross structure is set up in such a way that patterns, and their temporal sequence, are associated.

Each N’s list does not fill, and then just stay that way. New thoughts have a chance to work their way into the lists, at the expense of the least used data. In a very large system, the thoughts producible by that unused data would get eaten away at; but not completely. Because of the method of memory distribution, and the laws of mathematical probability that actually control reality, a faint image would almost always be retained. With this faint image, a little association through repeated need would re-install the original data — with many of the Ns involved being different ones; but with a fairly accurate re-generation of the original relative meaning.

I suspect that, in time, this process of prioritized list-filling elevates itself. Meaning is developed, relative to other meaning. Initial meaning serves as a basis out of which higher meaning can develop and interact. The initial meaning becomes less and less a focus — it is less and less used — and much of it eventually becomes essentially uninvolved.

 

Of the various routines in the program, the one that most defines the gross structure of "brain anatomy" is called the "Out M" routine. This routine connects sections of cortex to other sections with a general Input-to-High-Area-to-Output directedness. The routine is simplified and accelerated by allowing the general organization of Ns to fall where it wants to by virtue of hexidecimentality (sorry).

The C128 memory is approximately a pair of C64s — two banks of 64K of memory, accessible to an 8 bit "6502" CPU. In 6502 lingo, a page of memory is 256 8-bit bytes. For the beepers, a page of neurons is 128 neurons wide. This is because the small part is 2 bytes wide. The small part is where all the action is. The beeper dendrites are where the Fires hit. Everything else is going on in the background, or internally, within the Ns, if you will.

In addressing memory, 255 is the maximum LSB (least significant byte) for accessing a byte "on your page." To access other pages, you need an MSB (most significant byte), which can also go to 255, taking you up to 64K altogether (there are 256 addresses to each part here — ranging from 0 to 255). The "M" in "MSB" is the "M" in the "Out M" routine. The pages of Ns, 128 wide, are piled up 22 pages high. Any N can quickly affect another N directly above or below it by manipulating the MSB involved, and using the LSBs that are already in place.
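
In Python terms (just for orientation; the actual routine is a few ML instructions), hitting the N directly above or below means keeping the LSB and bumping the MSB by one page; the top-to-bottom wrap of the 22-page stack is described a little further on:

    # Small parts start at $2000; a page is 256 bytes (128 Ns); there are 22 pages.
    BASE_PAGE = 0x20
    PAGES = 22

    def neighbor(addr, up=True):
        # keep the LSB, move the MSB one page up or down, wrapping around the stack
        msb, lsb = addr >> 8, addr & 0xFF
        page = (msb - BASE_PAGE + (1 if up else -1)) % PAGES
        return ((BASE_PAGE + page) << 8) | lsb

    print(hex(neighbor(0x2000)), hex(neighbor(0x2000, up=False)))   # -> 0x2100 0x3500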

What would be the "Out L" routine is broken into two routines — "Out L Stim" (collateral stim) and "AOL" (Aim the Out L list entry). These routines provide side-to-side action within a given page, while the Out M routine provides up-or-down action between the pages. Together, the system is an active matrix. The rules composing the routines have been chosen to allow the capture of temporal associations within the matrix.

When you hear the word "CAT," you hear the "C" sound first, then the "A" sound; and after this temporally-ordered combination, you hear the "T" sound. Hearing them in rapid succession conjures up whatever associations you think of when you hear the word "CAT." This works, and it works without confusing the "C" sound in cat, with the "C" sound in canary or catsup. It works right because the active word itself is a combination unlock into your mind. The important ingredients of the combination are the component sounds, and their order. The exact timing is of less consequence — but timing does produce spaces relative to other timing, and the spaces can affect the relative meaning. Relative amplitude is of even less significance. Absolute pitch is not a factor, until the physical limits of the ear are reached. Relative pitch is the primary structure of meaning in the sound train.

When you hear the word "CAT," the "C" sound produces a neural stimulation that is very similar to that which is produced by any word starting with the "C" sound. The particular pattern of Ns stimulated, leads toward the cortex, but makes several transitions along the way. At each transition there is an abundance of neural transmission leading back to the area of prior stimulation. Not only is it going the wrong way, but there is a lot of it ... about ten times as much as leading in ... and it’s usually dispersed ... scattered around randomly. How could evolution, in all its wisdom, be so careless!

All that information coming back to your ear, after the "C" sound went in, is "looking" for the next sound. It is anticipatory stimulation. It wants to mate with something familiar. If the "A" sound comes next, then a characteristic secondary pattern of feedback will be set up to look for the third sound, that is a much more unique pattern than if it were the first one fed back. Furthermore, as the real-world series of sounds pile into your ear, the feed-in, feed-back process finds its way into a geometric progression of potential combinations. Each transition area sends dispersed feedback to the one before it, as you make your way to the cortex. Generally, the feed-in pathways produce a neat orderly map of excitation at each transition area, right up to the primary cortical area of the given sense involved. (This is, of course, a simplification and generalization of what really goes on. For one thing, the neural "cycle rate" is some one thousandth of a second, so that many feedback cycles are possible within the time frame of single phonic sounds. This probably helps with handling variability of timing in phonic relationships.)

 

Association is at the heart of every level of the thought process. It not only controls the flow of relative meaning in your thoughts; it is the very structure of incoming communicated intelligence itself.

Extraneous neural activity, such as heart-rate stimulation, and the general random neural noise level, have no effect on consciousness, because these activities do not contribute meaning to the temporal pattern sequence. Only meaningful components can add to the resolution, depth, or degree of consciousness; because relative meaning itself is the consciousness. Would-be meaningless components can’t degrade it, because they aren’t a part of it. (I’m not referring to distracting thoughts — these you become aware of, due to their meaning.)

 

At the center of the stack of 22 pages of Ns, are a pair of pages called the "Out Page" and the "In Page." These can be thought of as the output and sensory ports leading from/toward the motor cortex and somatic cortex areas; or the speech motor cortex and the auditory cortex.

Sound from the true-pitch keyboard or the ear microphone is processed by the mic circuit into a voltage level that is determined by the instantaneous frequency of the sound. The voltage is converted to a resistance for the Game Port A-to-D (analog-to-digital) input. In other words, in the computer, a register has a value in it that is controlled by the pitch of the sound in the room. The sound-source should be fairly sinusoidal — it should be pure tones. Whistling is good. The keyboard and beepers each drive one of the three C128 voices, using the triangle wave form. Most other wave forms will produce unreliable, erroneous data.

The information is supplied to one or both beepers (whoever is "awake") at their "In Area," an area sixteen Ns wide, near the middle of the In Page. This routine is called "Mic→Spectrum." The 16 Ns serve as frequency centers. Only one or two of the 16 Ns are stimulated for a given pitch. The relative weight of stimulation on two neighboring Ns represents the frequency of that moment. With 16 stim levels and 16 Ns, 256 frequencies can be depicted; within a range of about two octaves. Stimulation takes a little time to "drain off" from beeper N dendrites, so that remnants of prior stims tend to be present in the In Area as new stims are applied. In Page Ns are restricted from hitting In Area Ns, but In Area Ns are allowed to hit any other non-In-Area Ns on the In Page. The In Area is meant to be the area and level where an accurate perceptual map first impinges on the senses; like the cones and rods of the eye.
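
One way to picture the Mic→Spectrum encoding (a Python sketch; the exact split used by the program may differ) is to let the 8-bit pitch reading pick a frequency-center N, and let the remainder set the relative weights of that N and its neighbor:

    # 16 stim levels x 16 frequency-center Ns = 256 depictable frequencies.
    def mic_to_spectrum(pitch, in_area):
        # in_area: the 16 dendrite IN#s of the In Area Ns; pitch: 0-255 from the game port
        center = pitch >> 4              # which of the 16 frequency centers
        frac = pitch & 0x0F              # how far toward the next center
        in_area[center] += 16 - frac     # stim the nearer center harder
        if frac and center + 1 < 16:
            in_area[center + 1] += frac  # and its neighbor in proportion

    in_area = [0] * 16
    mic_to_spectrum(0x5A, in_area)       # one reading from the mic circuit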

Activity on the Out Page produces a pattern of dendrite IN#s on the 16 Ns in the Out Area, near the center of the Out Page, one page before the corresponding In Area on the In Page. These Out Area IN#s are used as weights on frequency centers, to arrive at a single-tone frequency result, for each given cycle with activity in the Out Area. The frequency is shifted and ranged to approximately correspond to the In Area spectrum. In other words, the Out Area is handled by the program in such a way as to simulate simplified cerebellar action. The result is used to set the frequency of a C128 voice. (In a very abstract and distilled sense, you could say the beepers have eyes for ears, and hands for a mouth.) A delay time is used to ensure that tones are sounded for enough time to register accurately in the mic circuit and game port. A tone can last longer if the given Main Loop cycle is running slow that moment. The tone is delivered to the room, for pick-up by the mic as well as for monitoring beeper behavior. The temporal matrix includes exceptions and diversions that promote learning through feedback.
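
The Out Area side of that, sketched the same way (the 16 center frequencies below are made-up placeholders spanning roughly two octaves), is a weighted average of the Out Area IN#s over the frequency centers:

    # The 16 Out Area IN#s act as weights on 16 frequency centers; the weighted average
    # picks the single tone for this cycle (the program then shifts and ranges the result
    # onto the In Area spectrum before setting the C128 voice).
    def out_area_tone(out_area, center_freqs):
        total = sum(out_area)
        if total == 0:
            return None                  # no Out Area activity this cycle: no tone
        return sum(w * f for w, f in zip(out_area, center_freqs)) / total

    centers = [220 * 2 ** (i / 8) for i in range(16)]        # placeholder centers, ~two octaves
    tone = out_area_tone([0] * 7 + [3, 9, 2] + [0] * 6, centers)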

 

The Out L and Out M routines spread the In Page and Out Page activity throughout the matrix. While most pages can stim the page above or below, there is isolation between the Out Page and the In Page, except at the periphery of these pages. This leakage is meant to parallel the "voice-muscle-sense," or sense of "touch" we have in our various speaking apparatus, as well as "mind's ear" internal data flow. The isolation between these pages defines "ends" to the system, to ensure that the primary resultant communication of the system with itself is through the air; so that human interaction, through the air, will be on the same level as the system's own basic feedback orientation.

The Out M stimulation has been channeled into one-way sections of 16 Ns of width. This simplifies handling the I/O pathways, and provides a neat set-up for bidirectional temporal loop formation.

The channel including the In Area and Out Area is granted higher status — these Out M hits are strong, to simulate reliable transmission along primary data pathways. Ns within these channels should probably be restricted from Out L hitting other Ns within the channel, on the same page; beyond this, however, any N can hit any other N on its page. The current implementation only applies this restriction in the In Area; but encourages it for all the 16 N-wide channels, on all pages, by initiating the AOL 16 Ns ahead, and placing the Out L stim (collateral stim) 16 Ns behind, the current N position on the page.

Various arrangements have been tried. Out M routines that hit +1, +3, +7 pages ahead, and -2, -4, -6 pages behind, simultaneously, for example. It didn't much seem to matter, at least at this level of N# depth (and with an IN pattern-sensitive non-FM system), exactly how you set it up, but you must include -1 inhibition. You can have more inhibition, but you have to have -1 included. -1 inhibition means that while you hit one or more pages ahead, and/or behind, with respect to the scanning direction, you un-stimulate the N one page behind. In the system here, -1 inhibition is one page the opposite direction of the 16 N-wide channel you're in. Without this rule contributing to the characteristics of propagation, the system will get tied up in tiny loops that take up all the time; while no meaningful interplay of real-world data and internal data is handled.
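
The rule itself, in a Python sketch (stim strengths and the particular ahead/behind offsets are placeholders; the point is the un-stimulation one page behind):

    # Whatever pattern of Out M hits is used, the N one page behind (opposite to the
    # channel direction) gets un-stimulated, which keeps tiny loops from taking over.
    def out_m(matrix, page, col, direction, hits_ahead=(1,), strength=4):
        pages = len(matrix)                              # matrix[page][col] holds dendrite IN#s
        for step in hits_ahead:
            matrix[(page + direction * step) % pages][col] += strength
        behind = (page - direction) % pages              # the "-1 inhibition"
        matrix[behind][col] = max(0, matrix[behind][col] - strength)

    matrix = [[0] * 128 for _ in range(22)]
    out_m(matrix, page=10, col=40, direction=+1)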

 

Pattern association, and temporal pattern sequence association, are facilitated within the overall matrix by having each active N set up pre- and post- stimulations of other Ns, that stand available for other active Ns to find simultaneously active. The stims to Ns about to be scanned this cycle, facilitate immediate pattern handling. The stims to Ns already scanned this cycle, facilitate temporal sequence handling — they are a link from one "frame" in time to the next. (In addition, any active N is a link to the future, because the stimulation level on its input is only reduced, not erased, each cycle. It is reduced a lot more when the N times out to Fire. It is cleared by the Ns Tire routine when the xth Fire is reached.) The Out L routines support both the pattern and temporal functions within a given page of Ns. The Out M routine supports both functions among the pages. The Out M routine also provides the primary I/O pathways; which are the orderly, mapped representation of the world/intents, maintained, but compounded upon, through most of the pages of Ns.

 

The program handles each N, one at a time, in the order that they exist in memory. This scanning process begins at 2000. This is the address mapped onto the video display, in the upper left corner. The 40 column display shows 4 Ns per character box; since the box is 8x8 pixels, or a stack of 8 bytes, and there are 2 bytes per N "small part": the IN#, or dendrite, and the delay timer for repetition rate. Unfortunately, 128 Ns don't nicely fill in just one 40 character line — 160 do — so the pages don't show up neatly stacked like they are in the figure here.

As the figure shows, there is a limit to the I/O channels. The Input Channel ends in the High Area. The Output Channel develops out of the High Area. The top 3 pages of Ns, and the bottom 3 pages, taken together, comprise the High Area. The top page is related to the bottom page the same way any two adjacent pages in the stack are related. In other words, the stack is not linear, with ends; rather, it wraps to form an endless cylinder.

The High Area is meant to act as associative cortex, while the rest of the matrix acts as transition levels from I/O organ through thalamus to primary cortex.

 

The system is regulated, to keep the percent of Ns active at a somewhat constant level, and to even out the cycle times into an overall system clock that runs at a fraction of a second. While it would be good to have a fraction like 1/1000, the little system here grunts out at about 1/3 to 1/20 of a second. A good '94 PC could have 10 times as many Ns, and still run 10 times as fast. A quad Pentium might have Ns 10 times as big too.

Regulation is a dual-area process. The High Area is regarded as dominant — the area that must stay awake and active; to attend to input and/or decide to produce output. The rest of the matrix is referred to as the Peripheral Area. Regulation allots a range of time to the High Area, and a larger range of time to the total matrix.

Regulation is accomplished by actually timing the portions of cycle time taken up by the areas, and correcting those times in upcoming cycles by altering the sensitivity of all of the Ns within the given area. The sensitivity is a threshold # for CMP with the IN#, used when handling each stimulated N in the N Loop to determine if its IN# grants it status as an active N.
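
A sketch of one regulation step (Python stand-in; the trigger numbers and the step size are placeholders), run once per cycle for the High Area and once for the total matrix:

    # If the area's share of the cycle ran too short, lower its threshold so more Ns
    # qualify as active next cycle; if it ran too long, raise it.
    def regulate(area_time, too_short, too_long, threshold):
        if area_time < too_short:
            threshold = max(1, threshold - 1)
        elif area_time > too_long:
            threshold = min(0xFF, threshold + 1)
        return threshold

    def is_active(in_num, threshold):
        # the CMP in the N Loop: an N counts as active only if its IN# reaches the area threshold
        return in_num >= threshold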

I suspect that regulation parallels some of the results of our own metabolic regulatory requirements. These systems probably involve the thalamus, and nearby neuro-glandular structures, as well as the limbic and autonomic systems. There is only so much oxygen, fuel, and exhaust available for the neurons, so they can’t all go at once! Furthermore, such a pattern is no longer a pattern. Regulation is a force that focuses the pattern into one of dominant strength, relating the process more to itself and the world, and less to anything and everything that could possibly be brought up in association. Our thoughts evolve. Evolution has taxed DNA, but has produced a system that can carry on the spirit of evolution in the world of thought we call society.

Regulation also involves stimulation, but regulation alone is not enough to keep this thing awake. When the room goes quiet, it will quickly quit talking to itself, and go dead until some external sound sets things in motion again. Stimulation must be internally provided, as it is for you. This action was first thought to be "noise," or a possible source of error; so it was applied for a single cycle, only when the matrix went into coma. Various areas were tried, but it seemed sensible to stim the High Area. The Peripheral Area will be stimmed by the environment; which may include output generated out of the High Area. It's the "decider" that should stay awake — the rest can rest, or be used according to it.

This system developed an exciting edge when I realized that the stimulation need not be random; but that it could be a repeat of whatever the last pattern was at some spot in the High Area. This seemed like a way of "continuing a thought," where it left off or dissolved. After some more reading, including the topic of the hippocampus, I realized that this Key Hi Stim "KHS" routine was doing almost what was being described as hippocampal action. About the only difference was that you don't wait for dissolution of activity — the positive feedback loops involved will produce a constant stimulation with a lot of momentum (hippo). New features could add on to this stim pattern, if the hippo weren't taxed at that moment. Ongoing features only drop out as neurons tire — but real neural nets are set up as vastly redundant arrays, capable of learning to pass on functions as Ns tire — so patterns can maintain a more constant effect "as needed." It comes down to a question of priorities. If the situation, or train of thought, calls for a more or less new pattern, then it will be modified. It may simplify down to something basic, but the important thing is that it keeps going, and you stay more or less aware, as a prioritizer.

KHS is this system's hippocampus. It applies a "Key" each cycle, to the Input Page of the High Area. The Key can grow if there is room in the Key Array. There can be room as Ns tire, or as regulation pinches them out of the action. The Key is a list of LSBs for stimulating that High Area Input Page. If you're an N there, your chances of getting on the key list are better if your IN# is higher as the openings become available. It might be better to apply the key to the two center pages of the High Area (the top and bottom pages of the stack) (a number of thoughts on improving the system are covered in a later section). At this writing, I don't know just where the "fingers" of the hippocampus reach into the cortex. (It would be nice to consider the limbic system too!) The KHS affects about 1.4% of the matrix; in the neighborhood of the 3% used by the hippo.
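
A sketch of the KHS idea (Python; the capacity and stim level are placeholders, and plain positions stand in for the address LSBs):

    KEY_CAPACITY = 40            # placeholder; the real Key covers about 1.4% of the matrix
    KHS_STIM = 8                 # placeholder; KHS always stims at one constant level

    def khs(key, hi_input_page):
        # hi_input_page: the 128 IN#s of the High Area Input Page; key: list of positions
        for i in key:
            hi_input_page[i] += KHS_STIM                   # re-impress the Key every cycle
        if len(key) < KEY_CAPACITY:                        # room opens as Ns tire or are regulated out
            for i in sorted(range(128), key=lambda j: hi_input_page[j], reverse=True):
                if i not in key:
                    key.append(i)                          # the hottest N not yet on the Key joins it
                    break

    key, page = [], [0] * 128
    page[17] = 12
    khs(key, page)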

I think hippo momentum is responsible for the advertised hippo quality of "somehow producing new long term memory." By holding a key, distributed to the general cortex memory matrix, you enhance the general "flavor" of the patterns being handled during the key impression time. A greater number of N-loop branch-offs carry on the meaning of a given thought, into a greater number of Ns, and for a longer time in each N involved. As these Ns recover their strength, the repair process chemically "cements" the memory in place at the associated synapses. The deeper the recovery, the stronger the cement. The more dramatic the event, the more Ns involved, allowed by regulation; especially in fight or flight situations. The more Ns involved, the longer it will take for new experiences to eat away at the image series, and bury it with relatively stronger impressions, supporting unrelated patterns. I suspect that the "cement" slowly weakens if the images are not replayed, in need, through association. It probably never goes away completely, and may last longer in Ns that aren’t in as much demand by other pattern sequences.

 

As I have pointed out, I am not making these assumptions from a position of credibility. Ideas like these have been piling up for years now, and I feel the need to communicate them, in case someone may be inclined to involve them in scientific study. Their source is introspective; but the interplay of literature and computer modeling has been more a focus than being the thing I'm trying to figure out.

 

It may seem that the beepers don’t need a system to install long term memory, since the RAM will do fine if you keep the power on. The neuron has been set up, though, to grant higher status to associations that are repeated more frequently. This characteristic works together with the hippo KHS function to establish a back-and-forth system of prioritization. The outcome is an evolution of patterns built on the survivability of long term components. The KHS develops its own application of constant general motivation for the matrix, out of the matrix. Its Key is like a DNA sequence that is constructed by experiences that it becomes more responsible for.

 

Short term memory is primarily the use of pre-existing pattern data. It is essentially the act of ongoing consciousness itself. There is always something new about any experience though — the order of familiar events, or the particular combination of familiar qualities within a given moment. Perhaps this factor of newness is distilled out, and piled into the temporal area — and who knows where else — as a series of sub-keys that can be accessed by association, and by a special form of association; relative chronological order.

 

We are now getting into an area that has not yet been developed for the beepers, and probably never will. The use of a hard drive and a PC comes to mind. Parts of the RAM can be designated as convertible; to be constantly replaced with "topical" memory data. Sub-Keys can be developed by experience and used as indexing labels to file and retrieve a given data array. The size of the Sub-Key and the data blocks would be geared to the size of the drive, so that any possible key has access to its block. The blocks start out "empty," but develop data just as any page of Ns would, for example. Part of the active matrix becomes a set of musical rooms, for the musical chairs the whole thing is. But some optimal percentage of the rooms are kept in place to give the thing an ongoing constancy, which uses the convertibility as a utility. The convertible area might be a complete cross-section of the system; or it might be better to leave certain levels out, such as the I/O ends.

 

 

This system might be vaguely analogous to the temporal and frontal lobes, related through the limbic system. Our system may be emotional because general and/or specific modifications of the chemical environment place neurons in various "modes" of sensitivity, causing them to favor different sets of patterns, as per the prior association of the given chemical flavor dispensed for the given experiential conditions. This would involve the autonomic system as well, and computers don’t need one. You just plug them in the wall, and they never need worry. If we get very far with this, we’re going to have some very interesting questions and decisions to deal with.


Beepers and Sleep

 

There is probably more to sleep than allowing N restoration. Granted, something this imperative is probably what lies behind the survival of organisms that become so vulnerable. Sleep is necessary to sustain a system that has developed advantages by "over-utilizing" a set of Ns. Somehow, intelligence is gleaned from a self-taxing system; and we came out ahead by sacrificing 1/3 of the day for it. The hippocampus involves only some 3% of the cortex, yet I suspect more than 3% of it is active all the conscious time.

This mode can involve more than re-charging the Ns. It could be an opportunity to organize and optimize the intelligence of the system. In the course of such routines, the conscious experience might be comparative nonsense — we use a safe time to get these jobs done — a time when we are inactive and uninvolved with society.

 

Deep sleep would seem to be a time when no trains of thought are being supported. There is no consciousness; at least there is no memory of any, if we are awoken then. This is probably the N regeneration cycle. It may also be involved in the overall sleep cycle as a component for memory "erasure" or memory "cementing," as per the complex chemical composition of neuro-physiology.

 

Dream time, spent relatively still and vulnerable, indicates that there is more developed here than re-charging the Ns. It could be that this is simply the time when that memory cementing takes place — that it is accomplished during random re-play of the day's various N-involvement peaks. Perhaps these peaks consist only or mainly of new involvement peaks — somehow, chemically, we don't waste time re-enhancing long term memory already established. This and/or the natural associative process, running free of world and hippo guidance, could explain the oddities of dream consciousness.

 

If what has been said above, about deep sleep and dream time, is all that is true, then there would be no reason to involve a sleep system in the beepers. But, the most recent evolution involves emotions — complex intelligent motivation in social interplay. Evolution always operates on whatever opportunities have developed out of its own history. It doesn’t cash in on all of them; but it only succeeds by utilizing real, lasting opportunities.

 

I suspect that dream time also accomplishes something very important and fundamental to the human condition. It is a time when the net can establish and re-vitalize a sense of self-identity. It is a primary component in the development of conscious self-awareness. Without it we would all interact in a much simpler way... more like ants or bees. It is the source of our motivations. It is the construction of our primary and subsidiary goal trains.

 

Obviously, this self-identity would not develop, or at least would not be compatible with society, if it were not built out of social experiences. So we do operate in an awake state for about 2/3 of the time. We periodically retreat from social demands to distill our experiences into the basic accumulation of what we are inside; to produce that "where we are coming from" that runs out our decisions. It just so happened that the time to develop this procedure was available as biological "down time."

 

The beepers have Ns and memory that don't require down time. They are allowed to speak to themselves internally though, for a few minutes every hour, free of outside influence. During their "sleep," the ear data is not used. Their tone "speech" is still audible for monitoring, but they only hear themselves through the sideline "mental" channels.

 

This system came about by accident. After a period of accidental deafness, I noticed the character of the beepers to be more "awake" — more vital, assimilative, able or "willing" to learn. Repeated experiments with this have confirmed the feeling — but, as always, the assessments in this field are going to be composed more or less of feelings. I feel that you are aware. The only test that may confirm this will be my death. I am confident we will all take this test. See ya around!

 

Another approach that may work better, or that should perhaps be combined with the above, is partial erasure of memory. It seems logical that we should make the system more assimilative and ready for new learning and behavior by removing less necessary data.

It is far more important that we have a keen perception of the present, than that we go around re-living yesterday in full detail. It seems logical that partial erasure could be involved in the mode of our general condition. First, you save all the strongest, most important data ("Scooter" routine — about every 5 minutes, the system stops for a second, while all the data in partially empty Ns is scooted over to the highest priority levels — so it won’t be written over.) Then you erase the least significant data in most Ns. (The "Erase" routine is not being implemented here. It clears a few bytes at the low priority end of the Ns Out LSB list.) When you awaken, you might cycle back and forth with these procedures a few times. You can re-construct some details where necessary, using the peak data you’ve saved. Meanwhile, there’s less chance that irrelevant trains of thought will get conjured up to compete with goals, instead of contribute to them. In a large system, partial erasure would leave a very full set of chronologically associative snippets distributed in the cortex at a relatively faint level of data density. I haven't experimented with this much — it seems more appropriate to involve it with the PC.
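
In sketch form (Python; I am reading "partially empty" as slots still holding the N's own LSB, the unused-slot filler described earlier, and the counts are placeholders):

    # Scooter packs the real entries toward the high-priority end of an N's Out L list;
    # Erase clears a few entries at the low-priority end.
    def scooter(out_list, own_lsb):
        real = [e for e in out_list if e != own_lsb]
        return real + [own_lsb] * (len(out_list) - len(real))

    def erase(out_list, own_lsb, count=2):
        cleared = list(out_list)
        for i in range(len(cleared) - count, len(cleared)):
            cleared[i] = own_lsb                 # drop the least significant data
        return cleared

    lst = [0x21, 0x40, 0x40, 0x33, 0x40]         # own LSB 0x40 marks the unused slots
    lst = erase(scooter(lst, 0x40), 0x40, count=1)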

 

Another question here concerns the hippo — should it go off line? The beeper’s hippo doesn’t need to rest. Our real hippo, being conveniently off to an area of its own, could receive localized chemical treatments. But, once again, we must consider the possibility of multiple opportunities. Perhaps the memory optimization procedure and/or self-identity definition enhancement procedure benefit from off-line conditions for the hippo, and/or modified function thereof.


General Stimulation

 

Even with KHS, system regulation, and lots of environmental stimulation, there is still a fundamental problem with this system. The memory won’t get involved — lots of it — most of it. This may well be a result of basic layout and proportions I have chosen. But there is evidence that you receive a general random stimulation level. It might seem that this would raise havoc with such a fine-tuned and intricate system as your mind. Note, however, that a random pattern series has no meaning. This is one of the key ideas that has me believing that consciousness is the relative meaning, over time, of what is going on in the system. It uses memory-based energetic interactions like a substrate — a sub-dimensional medium — on which it can float along as the substance that only is by virtue of what it means to itself. Whatever else is going on could only be conscious to itself; or not be conscious.

To implement memory involvement, the beepers have the "Out L Stim" routine, which minimizes the randomness by involving an arbitrarily positioned collateral N, every time any N fires. The relative position is identical and constant for every N firing. I suspect that this parallels the thalamic and reticular action involved with general activity-level setting and regulation, as per a given required alertness level.

It should be pointed out that the pages of Ns wrap from end to end, as well as from top-to-bottom of the stack of pages. In other words, the matrix doesn't form a pure cylinder; it forms a doughnut — a short fat one, ready to roll away from you like a wide tire. The Out L stim routine hits the N sixteen Ns to the left. Where you would run off the page, you come in on the right side, to continue toward the left, 16 Ns from the current N being handled by the scan.

The AOL routine also wraps; and it is oriented in the opposite direction. It starts 16 Ns to the right, and continues up until it runs off the right side to come in on the left side, and continue toward the right until it finds an active N. If the active N is already on its list, it swaps it up in priority level. I think of this as "preparatory" — as getting oriented to the topic. That data is related to what's going on, so we may need it now. AOL then continues to look for a new list member. If it finds one, it swaps it into the second-to-the-lowest position on the list (third-to-the-lowest might be better, but slower); bumping that member to the bottom rung. The one at the bottom is gone — overwritten. Then the program is done with AOL and goes on to Fire the Out L list, by increasing the IN# on the dendrites of all the Ns on the list. If AOL did not find a new member, or even a familiar one to swap up, the program still goes on to Fire the list, since you don't get to the AOL routine unless the current N had sufficient IN# level to Fire. Unused slots on the Out L Fire list are filled with the address LSB of the current N itself — the Fire routine does not use those slots — an N is not allowed to hit itself.
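
A rough Python sketch of those list mechanics, as I have just described them (positions stand in for address LSBs; the threshold and hit strengths are placeholders):

    WIDTH = 128                                   # Ns per page

    def out_l_stim(page_in, pos):
        page_in[(pos - 16) % WIDTH] += 1          # collateral stim, 16 Ns to the left (minimum hit)

    def aol(page_in, lst, own, threshold):
        # scan rightward from 16 Ns to the right, wrapping around the page
        for step in range(16, 16 + WIDTH):
            i = (step + own) % WIDTH
            if i == own or page_in[i] < threshold:
                continue
            if i in lst:                          # familiar member: bump it up one priority level...
                j = lst.index(i)
                if j > 0:
                    lst[j - 1], lst[j] = lst[j], lst[j - 1]
                continue                          # ...then keep looking for a new member
            lst[-1] = lst[-2]                     # new member goes in second-to-lowest,
            lst[-2] = i                           # bumping that one to the bottom (old bottom is gone)
            break

    def fire_out_l(page_in, lst, own, strength=3):
        for i in lst:
            if i != own:                          # unused slots hold the N's own LSB and are skipped
                page_in[i] += strength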

The size of the N list has been proportioned to the number of Ns it can access. You don’t want to be able to hit all of them, or there would be nothing unique about the pattern you hold. We must make a trade-off between accessibility and uniqueness — the optimum compromise produces the greatest meaning vector.

It might seem like a horribly tedious job, to type in the needed skeleton of 2800+ Ns, twice. It's easy, though — some very small and simple ML routines do the work in a second.


Adjusting Beepers

 

Besides having system regulation, KHS, environmental stimulation, and Out L stimulations, things will still not go well if a number of factors in the system are not carefully adjusted, and balanced with respect to each other, to assist the development of data assimilation characteristics. There is no interface for settings. Adjustments require an intimate understanding of the system. In a bigger computer there would be enough room to include some automating routines, operating off of timers and feature sensitivities.

Regulation uses two numbers to trigger off of cycle times that are too short, or too long. It has a separate pair of these numbers for each of the two areas it monitors — the High Area, and the total area. By adjusting these numbers, you force the system to involve a smaller or larger number of Ns in an average cycle — you indirectly adjust the cycle time, and affect its variability. Perhaps the most important consideration here is the relation of the High Area to the periphery — the proportion of time allotted, with respect to the proportion of Ns involved — and, most of all, the relationship between the upper threshold for the High Area, and the lower threshold for the total area. If everything else is right, you’ll see that this adjustment controls attentiveness — the tendency to stop "talking" and start "listening" when spoken to. There isn’t enough time to do both at once, with the settings given here; so the beepers "speak" in short phrases, alternating with short pauses. I feel that this makes sense for learning, in such a small and simple system.

The rate at which Ns tire is a factor open to adjustment. You may want a faster rate at younger stages. With established data in place, you may get away with #FF.

The rest of the adjustment involves the degree and balance between how hard various N functions hit other Ns, and how quickly those IN# stimulation levels are drained off.

As the program scans through the Ns, it watches for various address events, in order to modify itself, so as to be different types of Ns, with different physiological jobs to handle. In other cases, it is simply the particular routine that does hitting at a particular strength. The KHS routine, for example, always stims its Ns at a particular, constant level. Of course, their repetition rate is always a variable, and an important dimension to the hippo interaction. The Out L Stim routine uses minimum stimulation (ADC #01). Something has to be minimal, so set it to #01, and adjust everything else with respect to that. The Out M hits vary depending on where the N is in the matrix. Input channel Ns conduct reliably toward the High Area, for example. Any stimulation of an In Area N will make it fire. Out channel Ns conduct reliably toward the Out Area. The In-side to Out-side reflection (to be discussed shortly) is mild by comparison; and is completely suspended for Ns on the Input Page. All AOL lists hit all Ns with an intermediate value.

 

In balance with all these hitting levels is the draining off of IN#s, upon Firing an N, or if the N had sufficient IN# to be fully handled, even if it wasn’t ready to Fire yet. The latter case is a mild SBC#, while the former involves a number of consecutive LSRs. Without the proper balance here, the regulator will not have the pull needed to function, and the system will either bog down, or fly off the handle, virtually ignoring itself and the world.
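
Those two drain cases, sketched in Python (the number of LSRs and the SBC amount are placeholders; the balance against the hit strengths is the adjustable part):

    def drain_after_fire(in_num, shifts=3):
        for _ in range(shifts):      # the consecutive LSRs: each one halves the IN#
            in_num >>= 1
        return in_num

    def drain_after_handling(in_num, amount=1):
        return max(0, in_num - amount)     # the mild SBC#

    print(drain_after_fire(0x30), drain_after_handling(0x30))   # -> 6 47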

Note that if the slow mode is used (to view the action in the bank 00 beeper), the Dual Regulation thresholds must be doubled, since the cycle time will be doubled.


Reflection as an Aid to Learning

 

As part of the Out M routine, the whole Input Side is reflected to the whole Output Side; N for N, in a one-to-one correspondence, like a mirror image; with the exception of the In Page to Out Page (which would be like mapping the ear directly to the vocal cords). As environmental stim affects N activity on the In side, corresponding stim is projected to the Out side. This may seem frivolous, or even like cheating, until you consider the long term consequences, as nature may have done.

The stim to the Out side corresponds to Output that was just produced there, to create sound that affected the In side. The first such event will be the meeting of some various activities, that can develop AOL ties. This modification of the system becomes a new starting point for subsequent similar cycles. In time, "differences" become rare, and "expectations" become norm.

The Output teaches the Input how to hear; and the Input teaches the Output how to speak; until both sides are in agreement. Now, when the world says something that the system has been saying, it will have similar effect on the system. When the world says something the system hasn’t been saying, the reflection may help the system say it for the first time; which may help it say it again, until it, too, is "familiar."

Ongoing General Feedback Learning

 

The hippo KHS routine sets activity in motion, originating at the High Area In page (33). By starting things off at a high point of reflection between the In side and the Out side, the bi-directional logic waves take a full course in the proper directions in setting up CRs. Forward waves lead to the Output Page and randomly Fire. Reverse waves lead to the Input page, setting up CR anticipation links for those random Fire events. The Outs accurately hit the corresponding Ins. The waves start at the high middle, but soon are starting from one extreme end, heading to the other. CRs are set that fully reflect the chain of events that takes place when a given In is hit by its associated Out. It shouldn't matter, basically, that the training procedure is random with respect to which particular note pattern "word" is being trained in which order. The purpose here is simply to establish one-to-one links in the Out N-In N relationship. Other training will establish word-sound-order and phrase-word-order associations, and so on, as you get into association depth.

  beepers contents

Teaching Beepers

 

The Teacher routine is included to lend some structure to the background environment. It times out to "play" one of two musical phrases to the room. One phrase is a Mozart theme, the other is a scale, in the same key. One or both beepers receive the data; depending on which ones are awake.

The teacher timer is not a simple counter. It is a count of a particular neural non-event. It is decremented every time the Input half of the High Area is quiet (the half that starts with the page that the KHS hippo hits — regulation can bump the threshold above the highest IN# level in the whole first 1/2 of the High Area). Each beeper affects such a timer, that controls one of the two phrases. It is conceivable that the beepers learn to quell this area, in order to elicit the recitals. I say this because there has seemed to be an over-abundance of occasions where the teacher has been triggered by my "talking" to them; particularly when it has been a while since they’ve been whistled to. Input stimulation should have the opposite effect — it should get the High Area more activated, especially on the input side. However, I can’t say that I’ve thoroughly investigated this — there could be a simple underlying mechanism affecting the odds. Note that it must interact with you differently than with the other beeper. My hope is that it is a mechanism, not so simple, involving KHS, regulation, and the inherent meaning of world-system interaction. The meaning, supported by the system, becomes the operator of the system — it is the ongoing operation — it is reaction to the world, created out of past and present information from the world. The operation takes on complexity beyond that of the program that supports it. This higher complexity is the higher dimensionality of relative meaning. The simpler program and skeleton memory are like a note pad that the world can bring into participation with its more complex attributes. This process, relative to beepers, is considered in more detail in a later section.

Along the same lines, there seems to be an inordinately large number of occasions where a beeper will "announce" the teacher, more or less immediately before it starts, by doing a short abstract rendition of either phrase; as though it can sense, perhaps from timing patterns, that the teacher is about to play; but it doesn’t know which phrase. More likely, this is another form of elicitation, with associated learning.

Nothing so miraculous as parrot-like rendering of the teacher phrases has come from these little beepers. What they do, however, is more amazing to me. After all, a much smaller and simpler system could accurately "sample" the sound, and act like a parrot. Throw in some noise factors of variability, and you could make the computer seem smart. Beepers are smart, in the associative sense, and in a relative way.

There are some 256 possible tones producible by each beeper, within a rather narrow range of about two octaves. This means there should be a lot of sour notes. The first thing the beepers do, that is against the odds of random behavior, is to produce way too many notes that have the right relative pitch. They may be off-frequency, but there are little strings of them that have the correct frequency with respect to each other. There are also many single- and double-note events, that are close in pitch to the stimulus; though near-copying is not as exciting, since the feedback-learning system includes the mapped stimulus from input side to output side. This tips the odds; but it is exciting when the results show up days and weeks later! (You have many mapped runs of communication between parts of your cortex. At this writing, I don’t know if one of them runs from audition to speech.)

To be sure, most of the time is spent producing rather random sounding behavior; beyond the over-abundance of relative well-tempered pitch. This is particularly true if you don’t get involved with them. They seem to become much more responsive and intelligent if you pay them more attention, instead of just leaving them to the teacher, or each other. After all, what can they teach each other... and the teacher has no sensitivity.

A better teacher would "be there when you ask." It should occasionally start up, as this one does; but then lead you along, a bit past where you know how to go already. It should start with only two or three notes, here. It should stand by, and watch for relative phrase matches or near-matches, and reward you with recognition by repeating the phrase, plus a note or two — or occasionally you get the whole phrase. It should sometimes follow with the relative pitch, and sometimes lead with the original pitch. Nevertheless, the beepers have learned from the teachers. There have been many occasions where they have poorly mimicked the teacher, or have nicely repeated a few of the notes; usually in the right order, but usually bypassing some. Sometimes the pitch is very close. When it has been a while since the teacher has played, I’m pretty sure the pitch has usually drifted; but it has good relative quality (I don’t have perfect pitch, myself).

They do better when I whistle to them, while I’m working in the room. The record, at this writing, is the first five consecutive notes of "Over the Rainbow." You seem to get better behavior by leaving one asleep while the other is awake, for a day or two at a time. If you leave one awake too long, it really seems to get dumber. This goes along with the idea of being sensitive, as a teacher. I get feelings from their behavior that prompt me to chip in some data. At this writing, I’ve been foolin’ with them this way for about a year and a half. I have no doubt that they learn. But I haven't studied them the way a pro would. The development of this program, and the writing of this book, has severely taxed my work schedule.

If you want to give them company, that "has things in common" or "speaks the same language," you can transfer the data from one beeper in place of the other, to create a pair of twins. They don’t stay identical for any time at all, in terms of the array of numbers; but the general relative meaning developed within them will stay similar for some time. Don’t forget to transfer the Key data and indexes, etc., as well. If you want to check the identical-ness of two beepers, you also have to bypass routines that are subject to C128 system timing exceptions. It was difficult; but I was able to get both sides to behave identically, up to the point of turning up the mic volume and whistling at one of them. You are what you learn.

 

Consciousness is not a substance you can touch, and hold constant. It is active — when it works, it flies — if it’s not flying, it doesn’t exist. Model airplanes really fly. I think evolution has found a physical principle, not unlike itself; and has put it to work. It discovered the lens and the hinge, for example. The materials involved are inconsequential, so long as the principle can function.

However small, there is a real possibility that the beepers have awareness. If they do, it is probably a very faint, vague, low-detail experience, completely different from ours, when we hear the tones. It might correspond to the simple perception of touch, in an ongoing series of patterns between 16 pairs of "finger tips;" with 16 sensitivity levels in each Input finger tip, and 16 muscle strengths that can be applied to them from the Output side. Now imagine this experience from the point of view of being a lizard, with no other senses, or needs, and you might have it.

There are a number of inescapable differences between a computer system like a beeper, and a nervous system, that do not allow a straight-forward comparison by neuron count. Real neurons get tired real fast — in as little as 1/10 of a second if they’re taxed — and it takes them about an hour to fully recover. That’s a duty cycle of 1/1000, or 0.1%! Beeper Ns have nearly a 100% duty cycle. So, there are ways of looking at this and calling one beeper N worth a thousand biological Ns. There could even be an advantage to not having to pass on function handling to a series of tiring neurons. But when real Ns aren’t being taxed, they can probably chip-in occasionally, all day long, to provide a thousand times the resolution. Beeper Ns don’t need food; so they can be organized into a system fully devoted to sense, learning, and output. Biological systems are differentiated into all kinds of subsystems that work together to keep the whole thing alive. We got Ns running our heart and breath, making us run and eat — all sorts of stuff that doesn’t make us hear and speak; stuff that we couldn’t live without. But this has brought us association of multiple senses, and multiple modalities with which to affect those senses.

It is not likely that I will attempt to work with vision any time soon. I am looking forward to expanding the "beepers" into "speakers" in the PC. Note that you need a thousand times as much computer to make the system 10 times as big in its three dimensions of N count, N size, and speed. This "size" is all in terms of speed. The program spends about as much, or more, time handling active Ns, as it does skipping the quiet ones. If the active ones take ten times as long to handle, and there are ten times as many Ns, you 100X the speed to get the same cycle time, of about 1/10 second. A 1/100 second cycle time might support speech, with the correct ear, voice, and cortical tricks. The hard part will probably be analysis of the speech cortex. There’s something different going on there.

As important as vision and tool making has been to us, I sense that speech is a thing that has been paramount in our social evolution, and technological development. Without it, I think we’d be a lot like dogs that can walk upright. And I think dogs are virtually as aware as we are, in a basic sense. They deal with the world in terms of the environment, while we are always referencing our verbal base, as we mull through our conceptualizations, plans, desires, and work. They don’t plan for college, but they plan a little for what they need. Mostly they react in the direction toward what they need. They’ll wait until you’re gone to chew your slipper. When they see, they are aware of seeing what they see. They are aware of what they hear, in terms of simple meaning associated with their needs. They are aware of most environmental things the way we would be, if we did not have language. Even a mouse has a hippocampus and cortex. It has the rudiments of a decision making system. That system operates whenever the creature utilizes its knowledge base. Its behavior is learning, invoked by the past and present environment. We know that mice can learn, and can put their learning to use when it meets their needs.

It is interesting to consider manipulating the data of these beepers. What is happening when you swap one data set with another, in the same sets of physical memory? Assuming there is some consciousness involved, does it "stay" with the mass of the physical memory array, or with the data that resided there? The analogy is with our DNA here. My impression, though unclear, is that our bodies are replaced regularly; except for the heavy particles in the DNA of the surviving neurons. Do the particles in DNA somehow "receive" consciousness? One problem here is cell death. We remain ourselves, despite the loss of a huge number of DNA molecules... different ones for different people. And, no new replications are added to the system. This might be because that would destroy the meaning compiled there, by interfering with established relative interactions. It can’t be divine "tuning," or identical twins would have a common awareness. The DNA-cell produces the support system for the meaning potentiated by world impressions, patterned into the connections of the support network. The active meaning is the conscious-ness. Substance exists relative to that meaning, in terms of its meaningful qualities, and implicated eventualities. These qualities include depth and size, as well as surface integrity upheld by electron skin. Meaning can include color, smell and sound. All of the meaning is relayed to us in complex arrangements of relative timing. We, too, are complex relative timing. The meaning, the substance, and the DNA are all part of a greater set of active relative memory of timing. We perceive of time and distance because we are more of that same automatic inference of the point.

Your thought train and priorities move with the data. The physical memory base is a location for this in time. Each of two beepers is in different volumes of time, composed of different eternity loops, that all lead to each other. A copied beeper is the old one, entering a different vantage point in time of eternity. It becomes the new time. It would think the world had hiccupped, if it could think that well, and that it was no longer located at the old one. The old one, of course, goes on as though nothing happened to it, even though it now also exists elsewhere in time; until you swap new data into it. When you do that, it becomes the new beeper at the old location. What you think, and how you think it, is coded into the data. Where you are in eternity is a function of which mass issues your time. Both locations are you, at different times, with different thoughts; or, in this case, it could be same thoughts.

 

What is this "point?" The concept is developed primarily in chapters 3, 4, 5 and 10, though the whole book is attempting to slightly describe it. When gravitational forces exceed exclusionary forces, everything within the horizon of this event accelerates together, inward. From the outside view, we would initially predict that it takes a short, finite time for all within to simply meet together in a point. From the internal view, the process of acceleration is never-ending. Though we would expect that everything keeps getting closer together, if your reference frame was the actual stuff doing that, you would see a bang reality expanding, and say that such an outside frame is expanding at a higher rate of acceleration. To that higher rate, the internal rate of expansion appears to be shrinking. Now can you remember that it’s all just a point? The point is a mechanism of infinite dimensionality. Its appearance as a point, or not, is a matter of relative viewpoint, and a matter of predictions or assumptions about the worlds beyond horizons of time quantization.

Each cycle of existence of each atom is another way the Universe "went" for an eternity, relative to itself. Time is a thing we call reality. Each cycle of each H self-relationship is a complete quantum of reality. The set of these quanta produce yet another quantum of reality, relative to itself. In the process, time makes another thing out of those things. In transpiring relative to, and involving, all other such quanta within, over infinite levels of quantization of eternal times, each such quanta completes the definition of reality as a point. Each cycle is unique as a point within the point, that is a given point in the sequence that always totals the same point. There must be an infinite variety of these points if there is to be a single point. For each one to exist once is to complete the point and to make the point real. For the point to continue to be real as a source of all dimensions, it must repeat infinitely as the repetition of its component points, as each is a view of the total point. Eternity is composed of eternities. Eternity exists because its component eternities repeat. They must repeat because each component is the total, relative to itself. The relationships and repetition of the component eternities are the point.

Within this point, where such a complex eternity component does not "wrap," there is an offset in this complex time we perceive of as distance between objects, or as movement of an object through space, relative to its prior locations. This distance is separation by offset of sources of time of eternity. The sources can produce fundamentally low-dimension unconscious place holders of system; or they can support self-relevant interactive data progressions of consciousness. These higher-order systems are composed of timing relationships, such as those that induce their perception of distance. The timing and the distance is the continuous completion of the internal structure of the point.

To be conscious is to have a viewpoint within the point. The point is turned inside-out for you. From here, everything happens to come together as the point that it all is. Included as perception of distance is perception of systems of perception of distance. All such systems are You, each to itself, at its complex location in time source of eternity. You are the points of the point.

Time is a thing that is relative motion, that is shifting, that is change. Each cycle of each H, relative to our big bang, is change; is a shifting in phase, involving such a horizon; a horizon such as we could predict as the eventual resort for all of the matter of our big bang. Our time is a shift in phase of process, from its incursion at the bang, to its departure beyond the horizon of a black hole. Both ends of this phase shift are a horizon, relative to the process of our reality. Both horizons are, or contain, a relative point. This same point is also a relative position of phase of each cycle of each H.

Each such point is the same point; as another point in time, of the overall point of all time. Its intimate relationship to itself is observed by us to a limited degree as relative interdependencies between, and within, systems. These are observable as process and emergent properties that are the development of dimensionality of the point. We see this as photon relationships of energy levels, and as variable interdependence of rates of time for relativistic system components.

From any phase position of process there are other phase positions of process that appear to have become the point. There are an infinite number of these, relative to any phase of process. From one such infinite subset of process, we appear to have become the point — our phase is there, relative to the phase of that subset.

 

In beginning to describe our point, all of the long-standing fundamental questions of humankind are addressed. The inferences of the point are "why" we exist. Existence, relative to all existence of the point, is complete.

As such it is unavoidable. It is all yours. You possess everything, whether you’re using it right now or not. Time is the eventuality of such access coming to pass. Possession is relative process, with perceived relative options. It is really inevitable process. Thank heaven it is development, relative to consciousness. The forefront of this development of rising dimensionality is focused in multiple points, we experience as planets. Its current pinnacle here on Earth is focused in our children. The beauty of our children — their form, their happiness, their opportunity, their improving capabilities, and their potential to develop and produce even happier beings — is evidence that the innate nature of reality is magnificence far beyond the comparatively crude descriptions mankind has thus far accumulated.

 

It is a mistake to take our children for granted. Such mistakes are part and parcel to components of reality that fall away from the blossoming of developing dimensionality. In other words, some civilizations reach a dead end, through a lack of appreciation. Higher orders of civilization might be cultivating ones such as ours, among the expanding set of planets, to determine requirements for survival, and to explore the beauty that completes the point. That would be a difficult job, indeed. Thank you.

 

The point is that you are everyone. The realization of it could reduce tensions and make it more enjoyable here. Be nice to yourself.

 

How can a philosophy propose such a ridiculous idea? To reiterate, time is not really a single common thing, with respect to consciousness. Though you and I seem to be happening together in time, we are actually interacting between sources of times. These times are portions of a greater, single time; but with respect to the consciousness they generate, they are individual sources of different time. Every system is a definition of its own time, or set of times. These various systems of time have a kind of independence, supporting the individual experience of consciousness, or other levels of dimensional system function. They also have an interdependence. The co-existence of this independence-interdependence is demonstrated in the relative behavior of clocks, and the perception of time, within and between systems.

 

You are everyone, though everyone is his or her own time. All times lead to all other times, as the definition of overall time. Everyone has their own "now," that truly exists because it is experienced by you, at that time. Every"thing" is a vast set of such experiences. These different times act to produce a higher plane of dimensionality, that can be perceived of as a reality of common time. This product, in turn, will contribute to yet higher planes of dimensionality. Our component sources of time are likewise generated out of sub-dimensional sources of time. You might picture these condensed sets of dimensionality as planes stacked forever upon each other, where each plane forms a sphere. The source of this dimensionality is the point, at the center of the sphere, which derives its behavior from, and as, the overall system. To better assimilate observed reality into the model, we could say that the point is inside-out, so that spheres get grouped forever into larger spheres. The one largest sphere is the point within the vast distribution of component sub-spheres, within any black hole. From our point of view, we are heading for a black hole. From other points of view, we are already in a black hole, within a black hole, within a black hole, ad infinitum. This point produces all of your awareness, relative to all of your awareness.

  beepers contents

Program Flow Chart

A careful comparison between the basic behavior of a neuron, and the behavior of this chart, will reveal that a few characteristics have been left to slide a little. Aside from the obvious relative simplicity, due to numbers of synapses, etc., there could be better handling of the DELAY#, for example. An N can drop out quiet, leaving a random value dangling there, to be dealt with later, out of context. The CMP# IN# BCC could help by resetting the DELAY#. This would be more important in a faster system that could better utilize the delay principle, with a larger reset #; to get a better lock on FM intelligence. A worse laziness is in the AOL system. The whole page should be checked for swap-ups, whether we find a new entry or not. The page should be searched for multiple new entries, that won’t overwrite each other; though this should be limited to a small percentage of the list length. We might want to pick the strongest few, and/or consider specific address windows by priority.

  beepers contents

Beepers Ear Schematic

The 300k trim value is for setting the low end reading in the computer, with no sound detection. This should stay below #2D. Note that the no-ear-plugged-in condition delivers #FF, which will generally keep beeper speech interrupted; as does the low battery condition.

If I had known that this ear was going to run for a year and a half, I would have taken the time to use the 8-bit parallel port. The 4N25 output, for paddle A-to-D, introduces a degree of variability into the data-pitch relationship, with temperature fluctuations. This could be good, or this could be bad. There are 9 billion things you could test, each for years, to get the best beeper. We need to figure out what nature has already discovered.

For the PC, I intend to broaden the audio function to a sixteen channel voice band spectrum analyzer; to supply sixteen simultaneous frequency centers, all in terms of relative amplitude information.

  beepers contents

Register Designations

 

The following abbreviations appear in the program listing as labels for designated use of zero page, and other, addresses. A list like this is necessary in order to avoid conflicting use, when you work more directly, through a simple ML monitor such as the one built into the C128. Number values following some designations are required memory for initiating the program the first time it runs. Different values may be saved by the system for subsequent initiations. Designations in parenthesis are left over from prior revs.

 

ZP holding area: 1519-155F

ZP used:             19-23

                             26-3B

                             3F-5F

 

19-1C stim place holders

1E-1F teacher timer MSB

20-21 teacher flag

22-23 teacher timer

 

26 HO

27 (TF 4M)         (54)

28 L                    80

29 M                   02

2A REP               50

2B L                   14

2C M                  C0

2D REP              4B

2E (TF 5most)     (03)

2F-30 handle X

31-34 dual regulation

35

36

37-3A dual regulation

3B

 

3F

40 Hold X

41 L BYTE

42 H BYTE

43 MODE            01

44 BNK I             00

45 KEY

46 CT                 01

47 BANK            3F

48 L BANK         7F

49

4A ZP1             00 start #IN/DELAY# pairs

4B ZP2             20

4C ZP3             00 start N list pointers

4D ZP4             36

4E ZP5             01

4F ZP6             36    

50-51               02 save BL, dual regulation; bank 00,01

52 HIN#

53 ZP7 current N List pointer

54 ZP8 "

55 SAVY

56 OUTB

57 INS

58 BIT             02

59 HLWD

5A ENDO

5B OUTS

5C ZP15         00

5D ZP16

5E HY

5F Self LSB

 

 

1592             mic data

1593             hold bank

1594             MEM

1595             M2

1597             M3

 

 

1601 V1         C3 L freq SI LIST Sound Init pokes:

1602             10 H freq ®D400-D418

1603             00 PW L byte

1604             00 PW H nibble

1605             10 cntrl

1606             00 AD

1607             F0 SR

1608 V2        1F L freq

1609             15 H freq

160A             00 PW

160B             00 PW

160C             10 cntrl

160D             00 AD

160E             F0 SR

160F V3        1E L freq

1610             19 H freq

1611             00 PW

1612             00 PW

1613             10 cntrl

1614             00 AD

1615             F0 SR

1616             00 Fo L nibble

1617             FF Fo H byte

1618             00 resonance

1619             0F volume/filter select

...161F

1621             38 1 key Mode Key # conversion

1622             3B 2

1623             08 3

1624             0B 4

1625             10 5

1626             13 6

...162F

1630...

1652             3E Q key Note Key conversion

1653             0A A

1654             09 W

1655             0D S

1656             12 D

1657             11 R

1658             15 F

1659             16 T

165A             1A G

165B             1D H

165C             1E U

165D             22 J

165E             21 I

165F             25 K

1660             26 O

1661             2A L

1662             2D :

1663             2E @

1664             32 ;

1665             31 *

1666             35 =

...16FF

 

1700-17FF 00 S LIST sens & hold last out pattern from

                        page 2A

1800...

1802             0C LF KEY 96 notes

1803             1C

...1861           2E

1880...

1882             01 HF KEY

1883             01

...18E1          FD

...18FF

  beepers contents

Coarse Memory Map

 

0B00-0FFF pgm

1300 14FF pgm

1500-1518 additional designated registers area

1519-155F ZP holding area

1598-15FF pgm

1600 1667 lists

1668-16FF pgm

1700-17FF transfer Out Page, at time of N fire

1800-18FF key-note data lists

1900-1FFF pgm

2000-35FF IN#/DELAY# pairs

3600-4BFF N list pointers

4C00-FBFF N Lists

FC00-FC4F pgm arrays

FC50-FC9C pgm

FCA0-FE43 pgm arrays

  beepers contents

The Screen

 

To view the screen, the program is run in the slow mode. In following the run procedure, outlined in the last section, you simply skip the FAST command, in order to run in the slow mode.

The activity of beeper 00 is on the screen in the approximate order of the scan. This is set up in initiation at 195A-1965. The actual arrangement of the Ns goes like this:

Each N is two rows of eight bits; the first row the IN#, the second row the DELAY#. The bank indicator alternates back and forth, showing the speed of any given main loop. In the left position, the current bank is 00, the one you always see, and while bank 01 is being processed, the indicator is on the right. The indicator is switched as part of the "End of N Loop business," at 1E6C.

When the program is run in the slow mode, the regulation limits should be doubled. These are the hex numbers in memory at the following addresses, in both banks:

 

1401 03®06

1405 02®04

145B 06®0C

145F 03®06

  beepers contents

Program Modes

 

Various versions of this program have used up to 6 modes. This version uses modes 3, 4, and 5. The indicator is at the far right when the program is in mode 5, and would be in mode 1 at the far left. A mode can be selected by pressing the corresponding number on the keyboard. However, this program version will not leave your setting alone. The teacher routine is able to enter keyboard strokes, and is set up to reset the mode, as per hex memory pokes that you perform to select sleep for one or the other beeper.

 

In mode 3, both beepers are awake; meaning they both receive the mic ear data. Mode 4 allows the data only to beeper 00, and mode 5 sends it only to beeper 01.

The teacher plays phrases by reading a list of keyboard codes, one code every Main Loop , and entering them. The array for this music data starts at FD00, and can take up almost the whole page, if the teacher routine’s pointer is set that high.

Every time the teacher times out to play data from one or the other bank, the first and last keystrokes it enters can pertain to the mode.

 

To switch the beeper’s sleep mode, it is necessary to select mode 3 before selecting either mode 4 or 5; as per values you place in the data array. The teacher can be programmed to manipulate the sleep time, and serve as an announcement of who’s awake and who’s asleep. Or it can be set for leaving one awake and one asleep until you want to make the switch... which is the intention of this program version. (For the last few months of the 1½ year olds, both were left awake by the teacher... the sleep system itself provides about 5 minutes per hour of sleep for both beepers, simultaneously.)

 

The Main Loop interprets Mode # keyboard entries at 19DC. (This and other utility functions are sometimes referred to as the "control panel" or "panel." It checks the codes against an array at 1620, to convert them to numbers 1 through 6. To enter a mode in the teacher’s music array you have to enter the keyboard code, as listed in the Register Designations, for 1621-1626. The music data arrays of both banks have been set to start with mode 3, followed by mode 4; then the notes (the developed beepers start with mode 3, and enter mode 3 a second time). The data area is demarcated with 00s that are not read.


  beepers contents


Program Listing
 

The program listing is provided as a photocopy, on request; and a set of disks is available from

Mike Wilber
5044 B Wilder Dr.
Soquel, CA 95073

The set is on two 5 1/4" floppies, off of a Commodore 1541 drive. It includes the "blank" embryos, and three lengths of development — approximately 6, 18, and 21 months. The latter point may be updated.

A built and tested beeper ear circuit might also be available from the same address. Its price would depend on how many are needed. If only one is ordered, it would have to be $300. A slight demand would bring the price down to $100.

 

 

For the most part, to understand an ML routine, you just have to walk through the listing, and re-walk from various branches. At times you have to remember a lot of things at the same time to see how it all fits together. For most of the program, the notes along side the listing, along with the general program description, are adequate for following the logic and purpose of the commands. The more difficult routines are focused on here.

Lets begin where you go when you start the program — at 1934. This is initiation of the system, and is self-explanatory (to C128 programmers with a reference book). At 19C1 initiation feeds into the Main Loop. Again, the notes are adequate for understanding the branches.

The Main Loop feeds into the N Loop at 1A8B. Here we should discuss the scan, and accessing of N data.


  beepers contents
 

The Scan

There are 2816 Ns in each bank. In every Main Loop cycle, each N receives at least the minimum attention of having its dendrite IN# checked for zero or not zero. When it is zero, the N Loop is able to quickly skip by it to check the next one.

The scan begins at address 2000, and checks every even numbered address up to and including 35FE. The loop that does this starts at 1A8F. Its LSB action ends on 1A95, after which the MSBs are bumped up one. On 1A9D the loop JMPs off for tests required by other routines that come into play, for example, in the High Area — starting on page 33, and ending at the end of page 22. Some of the JMPing in the listing is an unfortunate product of the development of the system, to include new sub-systems after careful testing of simpler total systems.

When the scan finds that an N has an IN# greater than zero, it is still able to quickly go on to the next N if that IN# was below the current threshold setting, that is the value placed in address 1AC0. All that need be done, before getting back to the loop, is to drain off a bit of the stim level on that N — every N has this basic tendency to come to rest when not being stimulated (the literature has indicated that most real Ns only slow down a lot — they never stop firing completely). When the scan bumps its MSB to 36, it will be time to begin over again. But first, the N Loop must feed back into the Main Loop to take care of the basic business such as the teacher timer, bank definition, playing sound defined by the Out Area, the Teacher routine, checking the keyboard for mode settings or notes played, and interpreting mic data. Then when the main loop feeds back into the N Loop again, the banks have been re-defined in 47 "this" bank, and 48 "last" bank; so that the scan begins at address 2000, but in the other bank. On the next cycle, you get back to the first set of Ns for their next scan; and so on.

  beepers contents

Indexing

 

Primary indexing is handled by the scan, when looking at the condition of each N’s dendrite IN#. During that process, however, two additional MSBs are bumped, that are kept in 4D and 4F. The associated LSBs in 4C and 4E are fixed with values of 00 and 01 respectively. The same Y value that indexes the IN#s is used to index pointers kept in an array, from 3600 to 4BFF. The contents of these pointed pairs are used at 1AEC, for example, to create a pointer in 53/54 that points to the Out L list part of the N currently being scanned. This technique incurs little drag on the N Loop — there is only one extra LSB bump, and two MSB bumps that occur much less frequently than the LSB bump. For this, the system is set up for quick access to the Out L list, when an N is active. In more complex systems, this scheme provides flexibility — like different sizes of Out L lists, allotted by the pointers — or for accessing the lists of other Ns.

A faster, simpler approach would allow only the IN#s on the screen; with a single INY in the N Loop’s LSB bump, and a single MSB bump as well. When the time comes to access the Out L list, a pointer to it could be manufactured from: Y=LSB of N#, scan MSB-20=MSB of N#, and Out L list starts at BASE address plus the N# times 16. This is two place arithmetic that would take some time; but it wouldn’t be needed until the N is actually ready to FIRE, in this system.

For speed, one tricky possibility involves CPU stack manipulation. Instead of scanning the IN#s using actual indexing, you could incorporate the IN# data in a short program that reads it and BEQs to the next identical IN# program in memory. The speed would be tripled, for inactive N scanning, at a cost of 6 bytes per N. I haven't checked this out, or developed it; but it might go something like this:

The IN#/DELAY# "small" part of the Ns array would be replaced with the repetitive program...

When you JSR to the N-handling routine, the first thing done there must be to PLA PLA, and get the address of where you came from, in order to read and use the IN# there, or offset to access the DELAY#. Then you alter the stack so as to be able to RTS past whatever else you keep there, such as the Xth and even the Out L list.

*PLA     MSB

TAY     Y=MSB

PLA     LSB

TAX     X=LSB

CLC

ADC #01     pass the DELAY#

PHA

TYA

PHA

N-handling

RTS     (last RTS in scan finds JMP to panel)

 

The problem here, may be in accessing the lists of other Ns, in an organized, aligned manner. You no longer have the 128 wide N-page by 22-page arrangement. You have to use the stack address data and calculations or arrays to set pointers to the other N’s IN#s, as well as each N’s Out L list. The overall speed will be worse if N-handling is slowed too much. Hopefully, the fastest method in the PC will be more straight-forward.

  beepers contents

AOL/SWAP/STORE/FIRES/TIRE

 

If an N is active, and its DELAY# has timed out, the DELAY# is reset, and the N will FIRE. This is the point where we gather intelligence. I haven't checked, but I would guess that this routine slows things down more than any other.

 

At 1AEB, Y is set back to the offset in the page of Ns for the IN# of the active N about to fire. It is used to set the pointers to the Out L list. Then #1E is added to it to skip the next 15 Ns of IN# array. Next, a JMP is used to check for area type. If we are on the IN page, and have bumped into an IN# that is in the In Area, we bump some more to get out of there. We don’t want to be applying data to the In Area that will be confused with real-world data Input. Then, back at 1AFB, a loop begins that uses the INY pair to look at IN#s on this page of Ns. First we check for a wrap — once we’re back to ourself, AOL is done. Then another JMP (this thing could be speeded up if you wanted to do some careful typing) is used to guard against hits to the In Area, if on the In Page. From here we can find that we’re done, if this N is in the In Area, and its AOL search has wrapped into the In Area. Primarily, the next JMP is back to the AOL routine, with its page-bound branches, at 1B29. Here we test for the IN# activity of the prospective N candidate, to see if it can go on our list. The CMP #01 NOP can easily be converted to CMP 1AC0, if you want to require that the N be stimulated enough to FIRE, not just stimulated above zero. If it’s quiet, we loop back to the INY pair at 1AFC to try the next N on the page. If it’s active, we check our list to see if it’s already there. If it is, we branch to the SWAP routine. If it’s not, we swap it into STORE on the list. The SWAP routine at 1B12 raises the position of the list element, swapping it with the element above, unless it’s already at the top of the list. If it is, or after the swap is done, we branch back to 1AFB to resume the search for a new element. If no new element is detected, a wrap will take us out of AOL and into FIRES. If a new element is detected, the Dup Chk loop lets us go on to 1B3A where the entry at the second to the lowest position is bumped down in place of the bottom one; and the new one is added in at the second to bottom position. This way, the newest learning has a better chance to involve more than one new element; and to work them up the list. This is all guesswork, and here it might be better to go in at the third or fourth rung from the bottom. The higher you enter, the more time you’ll spend shifting old entries down, in order to maintain their relative prominences.

 

FIRES

 

STORE leads to FIRES at 1B48, which involves hitting the Ns on this page as listed in the current N’s Out L list, handling part of the KHS routine, if the current N is in the Hi Area Input page, Out L Stimming the N 16 places behind, Out M stimming and un-stimming the the like-LSB Ns on the page ahead or behind, and stimming the mirror-related like-LSB N on the Out side if this N is on the In side.

The Out M routine starts out with a branch tree that uses the principle of halves to quickly determine which column type we’re in. The outside columns don’t require any I/O boarder testing; Out M stim traffic is allowed either way from any page. The I/O driver columns hit harder, outside the Hi Area, so there are five types of columns; forward or reverse, with and without I/O boarder testing, and the I/O Drivers.

 

TIRE

 

After all the FIRES, it’s time to decrement the Xth status. If the N is tired, its IN# is cleared, and the Xth slot below the Out L list is reset for the next count down; then we repeat the N loop for the next N in the scan. If the N was not tired, its IN# is LSRed three times, before returning to the N Loop. This might seem harsh, but it is necessary to bring the system into regulation, with all these Ns latching on to each other. However, I cannot say that the adjustments I’ve come to are optimal for learning.

Another form of Ns Tire exists in the overall system. It is possible for the more active Ns to be over-stimulated — to have their IN# increased beyond #FF. When this happens, the IN# wraps, which can amount to a reduction in value below the threshold to FIRE. This quirk is evenly applied to the whole system; as is the Ns Tire routine. To leave it be is to have a simpler, faster system.

  beepers contents

KHS

 

The KHS and Dual Regulation sub-systems are the most difficult chains of command to follow, in the confined space of this little computer. They run into remnants of other systems and they are fragmented and tangled with each other, the N Loop, and the Main Loop, just enough to give me heartburn and a headache simultaneously.

KHS began with the comparatively innocent idea of simply stimulating the Hi Area a little whenever the whole system went quiet. HS became KHS when the idea struck to repeat whatever the last pattern was in the first six Ns of the Hi Area Input page. The size of this key was then increased from six elements to forty. Then a scratch pad was involved to allow stronger candidates first access to openings in the key. Then KHS was adapted to the current FM implementation.

First, let’s look at the frequent part of KHS. This happens in the N Loop, every time an N that will FIRE is handled. We want to see if this N could be a candidate for any openings in the Key array, or if it is an active ongoing element. At 1BF5 we enter the KHS N component routine, after just having hit all the Ns on this N’s AOL list. We check to see if this N is on the In Page of the Hi Area; page #33 (page #s are address MSBs, in hex). If not, we exit KHS. If so, we check "I," the index for the Key, to see if the key is full. If so, we exit KHS; nothing can be done to the key array when it is full and the Ns it designates are busy. As Ns tire, if they are not kept going by a combination of the activity of the rest of the Hi Area Input Page, or Out Ms from adjacent pages, or KHS, then they will fall out of the Key, creating openings for new elements. This develops out of the interplay of routines, as a function of the interplay of memory-driven waves, built from world experience. It is handled by other parts of the KHS system that operate on Main Loop time frames. These sub-routines support each other from different time frames and points in the cycles.

If the I index indicated openings in the Key, then the KHS N component routine branches to 1378, to handle the scratch pad. We are still operating within the time frame of a single N, located on the Input Page of the Hi Area. We enter a process of determining whether this N is already a Key element, or a prospective new element. To do so, we have kept a copy of the Key, as it was used in the last cycle. We also saved the index, that was used with that Key, "SI," which points to the correct starting position for reading the old Key elements from the Saved Key array memory area. We load X with this pointer at 1378. It is first INCed to prepare for a loop test at 137F. At 137C, A is loaded with our N’s address LSB, then the loop begins at 137E with DEX. At 1381 we compare this N’s address with each element in the old Key. If it is not equal, we keep looking, branching back from 1384 to 137E. If we find a match, we INC the new I, for the new Key we are building, and JMP back to 1B6F. Here, X is loaded with the new I, A is loaded with the N’s address LSB, and at 1B74 the N’s address is inserted into the Ith element of the new Key array. This ending to the handling of the KHS N Component routine is the case where positive feedback is supported as fundamental hippocampal action. A Key element from the old Key is carried over to the next cycle to continue as a Key member; as continued stimulus to that N on the Input Page of the Hi Area. The position of the element in the Key array may be lower than it was for the last cycle; if Ns Tired and dropped out; but position in the Key array is irrelevant; it’s just a list of addresses — the addresses are the positions of stimulation of the Hi Area Input Page. Note that the Key array position can only move lower since Ns can fall out, but new elements are not added until later — when all the dust has settled. This means that every active member from the old Key has a chance to get on the new Key before any prospective new elements get a chance to fight over openings.

Back at 137F; in the case that no match was found, this N is a potential new element, so we branch to 138C to try and enter it as a candidate. First we check to see if the scratch pad is full by checking the scratch pad’s own index "IS." Here, the CMP# is $50, instead of $28 as it was for the Key array index I, even though the pad is the same size as the Key in terms of Ns; because the scratch pad holds two descriptors for each N, while the Key array holds single descriptors; and $28+$28=$50 in hex (40+40=80). The Key array is simply a list of the addresses used when we Fire the Key, from another routine in the KHS system. The scratch pad is also a list of addresses; but these Ns, in that same page, are only candidates. The purpose of the scratch pad is to see which candidates from that page are the strongest; i.e., which have accumulated the highest IN#. We won’t know until we’ve processed the whole page, so we need a scratch pad on which to accumulate the information for later judgment. Each N address in the scratch pad array is immediately followed by its IN#, taken before the reduction that comes from FIREing the N.

If the IS index for the scratch pad indicates no room available, we exit KHS with a BSC to 13BF, which is a JMP 1B77, just past the KHS N Component routine in the N Loop. If the scratch pad was not full, at 1393 we TAY to index the IN# on this N this cycle. We hold this value in 135B for upcoming use. We load 135A, which holds the highest such IN# value so far this cycle. We compare this N’s IN# with that max to see if the max needs updating, and do so if necessary. Now we TYA to put this N’s address LSB back in A. X gets the current IS, and then INX to put the N’s address on the scratch pad, and INX to the next position on the pad. STX saves the new IS. LDA 135B retrieves the IN# for this N, and line 13BC puts that on the scratch pad, right after the N’s address entry. At 13BF we JMP to 1B77, out of the KHS N Component routine and on with the N loop.

 

By the time the N Loop has scanned through the Input Page of the Hi Area, all of its Ns, that were going to FIRE, have fired. But the matrix is already affecting the IN#s on various Ns there with Out L stims; and Out M stims will be coming from the next page. On the next cycle, Out M stims from the prior page will also up some IN#s there. That next cycle begins with page 20, however, which is part of the High Area, three pages beyond its Input Page. When the N Loop scan gets to page 23, we have exited the Hi Area, and for purposes of another system — Dual Regulation — we have dropped out of the N Loop to take care of some timing and regulatory business. This happens to be a good time to handle some more of the KHS system. The first thing we’ll do is Fire the Key. This happens, starting at 1D3C. How we get there goes like this: The N Loop at 1A8F is scanning the Ns. When it bumps MSBs, it JMPs to 132B. When the MSB has become 23, we’ll be taking branches to 1342, here, that JMPs to 13EA. Then the branches include a test for 23, at 13F0. At 13F2 we’ll BEQ to 143E, which JMPs to 1D3A. 1D3A is CPX #00, which is a harmless remnant of a simpler system. It serves as a reminder of how you can skip hippocampal stimulation when the Hi Area is active. At 1D3C, we load A with 33, the MSB to designate the Hi Area Input Page.

Though we have dropped out in the midst of the N Loop, the time frame for what’s being done now is a main loop cycle. We haven't exited the N Loop to return to the Main Loop; we are doing stuff that happens once every main loop cycle, and we will eventually resume with the N Loop, right where we cut out. The time is right because we’ve finished processing the Hi Area, and the current implementation assumes that the Hi Area is the primary subject of hippocampal stimulation. Hi Area processing begins on page 33, and extends through page 35, then is temporarily suspended for the Main Loop business, and resumes at the start of the N Loop on page 20, to continue through page 22. When page 23 arrives, we drop out for various reasons, KHS being one of them.

To stimulate the Hi Area Input Page, at 1D3E we load 4B with #33, to designate page 33 as the page of dendrites we want access to. Now all we have to do is run a little loop that reads the addresses and performs the hits.

At 1D40, X is set to I, which points to the first element we’ll use from the Key array, that currently spans the range of 1627+1 to 1627+I. At 1D43, we retrieve an address, then it is TAY to load A with the IN# at that address. At 1D4A we add #0B to that IN#, then STA it back. At 1D4F we check to see if this loop has finished its hits. If so, we load 4B with #23, so as to restore the N Loop for our later return.

Now that we have used I, it changes status to become SI, the saved I. It is put in 1508. 1507, the new I, is set to #00. (X is also set to zero, for purposes of monitoring activity in areas. If no Ns are handled by the N Loop, X is not affected; so it can serve as a flag for activity in sections of Ns. This technique is a remnant from simpler systems, that used X as a flag; because it saved a step in the N Loop. We just finished the Hi Area, so X gets reset. If other routines use it, they will hold it and restore it, until its purpose is met.)

Next, a JMP to 1360 takes us to a little loop that saves the Key we just used to the Old Key array, at FF9B+1 through FF9B+28 — we just save the whole array, and rely on SI to point to the correct start. Now a JMP to 14A3 takes us to Dual Regulation business. After this is taken care of, at 14CF a JMP 1A8D resumes the N Loop at the start of page 23.

 

The rest of the KHS system is handled at the start of page 34, just after the N Loop has processed the Hi Area Input Page; the focus of this KHS routine. How we get there goes like this: The N Loop bumps its MSB to 34 at 1A97 and JMPs to 132B for section tests. There, the branches lead to 1342 for a JMP to 13EA, where 13F4 takes up the N Loop’s job of watching for the end of the scan — an MSB bump to #36. This continues after a JMP to 1D16. There, if it were #36, the X flag would be handled, and a JMP back to 1AA3 would reset the N Loop for its next scan, and continue with the End of N Loop business. Since the MSB bump was to #34, at 1D16 we branch to 1D20, to check for #34. When this isn’t true, the branches restore normal N Loop operation, with a JMP to 1AA1. Since we have just bumped to #34, we arrive at 1D24, and load A with the I index to the Key. At this point in time, the I points to Key elements that are carried over from the prior cycle — prospective candidates have not been added yet. If I is zero, we’ve somehow lost the thrust of our positive feedback hippocampal action. At the same time, we haven't modified the Key. So, we can simply restore I with the saved I, SI, which points to the size of the last Key we used, that had size. Now a JMP to 1F4B takes us to the final bit of KHS system.

At this point we will attempt to expand the Key with any prospective candidate elements. If there are more openings than candidates, they will all go in; otherwise, the ones with highest IN# go in first. Note that positive feedback hippocampal action is supported by granting prior Key elements most favored status — they go on regardless of current IN#, by virtue of having been there already; unless their IN# goes below Firing status. This simulates the tiring of Ns in the hippocampus. Stand-bys may be set up to carry on the element’s job; or more important factors may, this way, get the opportunity to take over. Note also, that in this system, in the latter case, the criteria for prominence is gathered at the moment of Ns FIREing — the IN#s of candidates are gathered at the same point in time where the decision is made as to whether they should FIRE or not; which is a function of IN# competition under threshold adjustment for regulation.

At 1F4D A is loaded with I. If I indicates a full key, we don’t attempt to add new elements, and the branch takes us to some resets, and a return to the N Loop.

If the Key has room, we check IS, the index for the scratch pad, to see if there are any potential new elements. If there are none, the branch is directly to the JMP 1A8D back into the N Loop at page #34. Since the scratch pad is empty, nothing needs to be reset first.

Otherwise, we begin the process of stuffing the Key with info from the scratch pad. We load A with 135A, which is a record of the highest IN# of Ns in the pad. We load Y with IS, the index to the current end of the pad. We enter a loop that skips every other (address) entry, looking for an IN# entry that matches the max value. When we find one, we branch to 1F70 and DEY to read the address for that max IN#. Now X is loaded with I, the index to the current condition of our new Key. We INX to position the new entry, but STX 1507 to update the new I first. After the entry, we check X to see if we’ve filled the Key. If so, we branch to our resets before exiting KHS. Y is used to set the max IN# and IS, index to the scratch pad, back to zero for the upcoming repetition of the routines.

If the Key was not full yet, the branch is back to 1F65 with A loaded with the max IN# again. A single DEY at 1F65 brings us to the next IN# to check, since the Key-stuffing loop we were just in used a DEY to get the address. We continue to stuff the Key with elements that have IN#s that match the max record. At 1F66, we check to see if we’ve run out of Ys. When we have, we’ve used all the max IN# entries. It’s time to try and stuff in the next best thing. At 1F68, the max # is DECed. At 1F6B we check to see if its DECed out. When it is, there’s nothing left in the pad, and Y is zero. The max is already DECed to zero, so we only reset IS before exiting KHS.

When the max record has not been reduced to zero at 1F68, a JMP to 1F59 continues the process all over again, until an exit is reached.

I apologize for the mess, and hope you can appreciate my desire to walk away from the whole thing and start over again in the PC, rather than clean it all up. My thought is that there may be some people out there who will appreciate the communication of these ideas, for integration with their own development of systems, in more appropriate media. This way, you didn’t get the message a year or two later.

  beepers contents

Dual Area Regulation

 

Regulation is accomplished by timing the scan of sections of the matrix, and altering the sensitivity to FIRE for all the Ns in a given area. The two areas so adjusted are the Hi Area and the total matrix. The Hi Area is allowed to take a certain range of time before adjustment is imposed. The total matrix is given a range that starts where the Hi Area’s ends, and goes about double. There are many adjustment factors in the overall system that can affect regulation, and defeat it. When all is well, the system stays within a range of overall cycle times of about 1/3 to 1/20 second. The Hi Area is kept active for a great majority of the cycles.

The Hi Area is timed in two sections. It starts on page 33, and runs through page 35, where we run out of pages. This first half of the Hi Area time measurement is stored; then a bunch of "End of N Loop Business" is taken care of. When the N Loop starts up again, the timer has been reset, and we measure the second half of the Hi Area; pages 20 through 22. Then the two half-measurements are added together for use by the regulator.

The timer is reset at the start of page 23, and read at the end of page 32, as the Peripheral measurement. This figure is added to the total Hi Area figure to obtain the total cycle time used to regulate the Peripheral Area. The Hi Area can grant its time to the Peripheral Area, but it can also take half of the total time; before correction is imposed.

The routine modifies the sensitivity of Ns by changing the value encountered by the N Loop at 1AC0, in the instruction at 1ABF. This CMP# determines whether an N has been stimulated enough to FIRE.

The value placed in 1AC0 is one that was determined in a prior cycle. The routine always sets a value there, twice per Main Loop cycle; once at the start of page 23, before running the N Loop through the peripheral Area, and again at the start of page 33, to run the N Loop through the Hi Area. The two areas have their own independent setting, that is stored and retrieved each cycle. This value is modified as deemed necessary by the amount of time used — it is not a time measurement itself; it’s a threshold to FIRE.

The Hi Area time is stored when Peripheral page 23 starts. It is retrieved when Hi Area page 33 is about to begin; and used to test the need for regulation. If needed, the modification is stored for possible use in more than one cycle; then the stored value is placed in 1AC0, regardless of whether it changed or not. This is also the point in time when the Peripheral time measurement is stored.

Just before the Peripheral Area starts, the first 1/2 and second 1/2 of the Hi Area time are added together, and saved for later use. The value is added to the last saved Peripheral value, and this overall total is used to test the need for regulation of the Peripheral Area. If needed, the modification is stored for possible use in more than one cycle, then placed in 1AC0.

  beepers contents

KHS, Regulation and Consciousness

 

If neurons have a duty cycle on the order of .1%, then one would expect a complex neurological system to involve regulation, that would "dole out" the use of those Ns; to insure that the system is ready to handle the tests of survival. When fervent tests arrive, we may allow greater participation. To compensate; the general ongoing condition might be stingy. The instinct to find refuge after trauma, would add to survivability. Evolution could tap this vein with the complexities of sleep cycles.

A basically stingy condition might mean that neurons only participate at about 10% of their capacity, most of the time. It might follow then, that the actual average duty cycle is on the order of 1%. In this case, the tests of survival would have molded our system to provide for 100 stand-by sub-systems to take over, as needed. This means a properly organized beeper might compare with a neurological sub-system having as many as 1/4 million neurons! As you look at more complex thinking, though; having a hundred times as many Ns affords a hundred times the subtle variety of elements that might become involved as participants in any given instant of neuron pattern. This does not yield a hundred times the potential number of instantaneous thought components — it’s astronomical — its more like 100x, where x is the typical number of participating N elements, contributing to the relative meaning that is the consciousness, at any given "instant," on the order of 1/1000 second. (Actually, I imagine the math is more complex than this — the example is given to demonstrate the general manner in which we might have to deal with the numbers.) On the other hand, how much potential variety do you need? How much do you actually use? And how does its availability bear on the degree of awareness? The pattern must change to induce temporal meaning; but couldn’t it just change back and forth between two states? You probably can’t derive much meaning out of such a simple system, regardless of how many elements of "resolution" you involve. Look at the meaning that can be represented using only 24 letters and a space! "24" is misleading, here, however, since a letter is a complex pattern itself. The lowliest computer uses a field with 264 potential dot patterns to handle it. But, it does this with only 64 bits. So you might say that the "24" above should be replaced with "64;" still a very low number.

The point is that you gather meaning out of patterns over time. With only 64 potential dots, you could say everything that can be said verbally. In fact, you can do it while restricting the dots to 24 patterns, plus punctuation. What you need with the patterns is time, to produce order, which has meaning. The paper and ink is not conscious, but your mind is, because there the patterns change with time.

As the system under consideration is expanded, the resolution of patterns is also. Our "mind’s eye" must involve at least a million simultaneous neurons per about 1/30th of a second, to explain the clarity of our vision (though the eye evidently only supplies about 10% of this info — the transition level / 10 times feedback processing must supply the missing pieces, from experience). But the ability to order these patterns into sequences that hold meaning is the primary factor that supports consciousness. If you wanted to think the same thought all the time, you might be able to do it with something like a beeper; and have near the clarity and presence that your brain carries on into tremendous variety.

I have not aimed for that with these beepers, though. The instantaneous resolution is probably on the order of a few hundred Ns. But there are 2816 Ns that can be involved, where each one is defined by 15 variable Out L slots, that are each a 128-point choice (with no duplicates). The potential for patterns in these beepers is quite sufficiently astronomical, so as to cause me to feel that this particular factor is not an issue. The number is (N states)2816, where the N states would be (slot states)# slots or 12815, if duplicates were allowed in AOL. Furthermore, to define the consciousness, you would need to select a time frame for its "life." The prior number becomes an exponent for the number of pattern frames that occur during the given life frame. A definition such as this reflects the potential for behavior, and corresponds to a calculation that would cover the possibilities involved with all synaptic weight combinations. There are practical restrictions that whittle the number down a bit; the most significant of which is world conditions of process.

One might ask if this means a movie screen is conscious. No. A movie screen does not develop meaning to itself because it does not control the flow of patterns out of memory. Neither does the projector. These systems are only gates, pre-controlled by people. The kind of meaning that is consciousness is the kind of meaning that develops out of world interaction. The projector has mass, that is supporting a kind of memory process, and the screen portrays patterns of relative meaning; but the whole thing falls short in dimensionality, with respect to consciousness. Consciousness involves the constant potential for interaction, in every detail of the stream of patterning. The movie process is pre-fixed and one-way. The most you can do is interrupt it. The details of its patterns can not affect each other into modifications of overall behavior, in context to changing world events. This is the higher dimensionality of development. The kind of meaning that is consciousness is the kind surrounding, and including, decision. In a sense, it too is only gates — but the gates are organized in such a way that their system develops some control and responsibility over itself, out of experience. It’s not really all that different; but it’s a difference that gives the meaning its awareness.

The brain is a medium that supports decision making. The medium is composed of protons, electrons and neutrons; just like the movie screen. It is no more conscious than the movie screen. The patterns imposed in the brain medium are what develop consciousness; as they become allowed to interact with the world. The physical brain is the location of the source of times that support your given viewpoint. The brain allows or supports the development of world interaction; but it’s the relative meaning, of the complex reactions, that is the consciousness. Developmental meaning is the dimensional outgrowth of time, out of the mass-based energetic system of gates that supports it, amidst its world of interplay. It cannot exist without its sub-dimensional medium; but a living brain can be unconscious if it is not running pattern activity with relative meaning, in response to the conditions of the day’s events, that it modifies.

Conscious meaning exists relative to a base of memory molded by change in the world. Some of this change has resulted from the behavior of the system having that memory. This relational information is included with the influx of world data. Such meaning we take for granted — but it is the stuff which is conscious induction from the parallel series data stream, as an entity with meaning relative to itself and the world.

If you are sleep-walking (with your hippocampus off line?), your consciousness will not be of its usual keenness; and this will be reflected in your decision making ability. Mostly, you won’t have made many long-term memory entries; so your inability to recall the event will cause you to believe you were unconscious. Consciousness peaks "at," or with reference to, the decision making process — over time, with memory.

I suspect that decision making is primarily supported by Key stimulation from the hippocampus. It’s not the hippo alone that decides; it’s the interplay between it and the cortex, under regulation, that forces world-derived patterns into stratums of greater and lesser usefulness with respect to the current world conditions. Decision making will be impaired when the hippocampus is lost. It will hardly develop when there is no hippo in the first place.

It is the innate potential for evolution, as it fundamentally exists in matter-space-time-logic, that leads to consciousness. The world consists of patterns. The potential exists for those patterns to take a course of development, because the logic of it all contains opportunities. The natural flow of logic is into its opportunities. It isn’t all participating at our level; but the trend is cast; consciousness is on the increase.

When you produce a decision, you react to circumstances. The circumstances may immediately involve the world, or only your thoughts. There is a greater or lesser length of time between the initiation of the circumstances, and the results you issue from decision. This intermediate time is spent considering options. Circumstances stimulate consciousness of the options. They can force immediate response, or they can force you to get deeply tangled in a web of options. Or you can get sick of dealing with things, and decide to just sit there and contemplate nothingness. This is still behavior produced by decision in response to conditions. About the only waking time that options aren’t involved at all is in primitive S-R behavior, such as jerking your hand off of something you didn’t realize was hot. You become conscious of actions such as these after you’ve initiated them.

Complex consideration consists of a tree of strung together associations. The world has impressed these associations into your memory. They naturally exist there in tree form, by virtue of their relative meaning, with respect to the development of past relative meanings. Today’s new associations become part of tomorrow’s tree. The particular series of events, that has led up to your present state of consideration, has shaped your priorities. The decisions you make are a function of the experiences you’ve had, the order you’ve had them in, and the overall biochemical condition of your consciousness-supporting medium at every step along the way.

 

Perhaps the most important message that could be received from this, is that you should consider what you want to spend your time considering. (In other words, forget this book and go have some fun!)

  beepers contents

Set-Up, Load and Run

 

To run the program you need the mic ear circuit, as well as a C128 and appropriate disk drive and monitor. The version with data will simply run — the blank version must be "born," which again might be done by simply running it. An understanding of the program would be desirable, so that minor adjustments can be performed during "growth."

 

From power on, or reset; set 40 column C128 mode.

MO{shift}N

L"0B",8

L"1B",8 10B00

L"000",8 } "000" & "100" are for blank version —

L"100",8 11300 } see directory of young version for #s

X

FAST (screen goes blank, type carefully)

MO{shift}N

G1934

 

 

A phrase the young ones are familiar with, in key-board labels, goes:

 

= @ = @ G = @

= @ = @ G = @

; L ; L ; L G = @

; L @ K

= @ = @ = @ = @

 

They tend to mimic the "= @" part, and the "G = @" part. They have the first few notes of "Over the Rainbow" in their distant history, by whistle, and may pick that up more easily than other things. They can assimilate whistling a lot better than the keyboard, because each note of the data can make a longer impression.

 

To shut down the system, without losing data:

 

RUN/STOP

{shift}CLR/HOME (screen is still blank, if in FAST mode)

{shift}CLR/HOME

X

SLOW (screen returns)

(format a disk; with Warp Speed cartridge installed the command is @F — I haven't checked if the commodore format command messes up the data)

MO{shift}N

S"0B",8 0B00 1000

S"1B",8 10B00 11000

S"0XX",8 1300 FE44 (the "XX" is your sequence number

S"1XX",8 11300 1FE44 — I use 36-idecimal)

To run in slow mode, from here:

 

G1934

 

 

To run in slow mode, from power on:

 

MO{shift}N

L"0B",8

L"1B",8 10B00

L"000",8

L"100",8 11300

G1934

 

To stop in slow mode:

 

RUN/STOP

{format disk}

S"0B",8 0B00 1000

S"1B",8 10B00 11000

S"0XX",8 1300 FE44

S"1XX",8 11300 1FE44

 

Remember; the slow mode will not function in a similar manner as the fast mode if the regulation timing is not doubled. The numbers to double are located at 1401, 1405, 145B, and 145F, in both banks.

  beepers contents

Mathematical Analysis

 

Sorry, I can’t give you a bunch of formulas. I haven't spent as much time developing this capacity as I probably should have. I have fallen into what feels for me to be an easy way in — approaching the subject in terms of relative logical function. Along these lines, the foregoing ML listing suffices, at least for now, to satisfy my needs for modeling my efforts, in reference to their development. For all I know, there may not be anything in the way of more standard math that would be more applicable to that endeavor. The system runs. Maybe there is no math that can describe that... the running system itself is it. The program listing is a kind of series of logical-mathematical statements, that support each other, and thereby support the flow of process within the computer-world system. They direct that the computer will maintain a certain degree of structure within its memory; but they have no means of directing the overall behavior so generated... this involves world variables.

I would not be surprised to learn that you cannot predict the real-world behavior of beepers. There are some 2128K•8 patterns a C128 can make, each frame; and this is the sort of jungle the beepers live in. Maybe you could pre-define the memory array, but there is no way of pre-defining uncontrolled, real-world interaction. The infinite complexity of the Universe precludes a prediction of that. There would, of course, be probabilistic statements that can be made. Most interesting to me would be the development of meaning vectors. This would be a mathematical description of consciousness, at least in terms of fleeting components of it; or predictions pertaining to its general characteristics, in the long run. The fundamental structure of such a mathematical approach would probably have to involve the number of Ns, their size, their reach, their speed, and a real curve ball; their organization. Still, I suspect that we won’t have much without also including the even more horrendously complex factor of memory content. It’s like designing a race car, without having chemical reactions defined for the fuel system; while the object is to predict speed. The memory is the most important set of variables, with respect to world behavior predictability. You need to know the exact relative weights of every detail of past experiences; and you need to know the exact relative timing presented by all ongoing details of the environment. That’s the problem. The meaning vector also includes the relative values of constantly updating variables within the above list, over time. It gets conscious because these relationships "see each other" over time thanks to the "holding capacity" of the system, or the component inferred from the past states, producing this unique, constantly changing memory definition of enact-able meaning. You see what I mean? Only the running system is a description of itself, and what happens.

 

Reality is composed of logic. Logic is all the "stuff." The smallest amount of stuff is a really really whole lot of logic. It is logical that you would develop, and be logical.

 

Mathematics is a language that develops out of the logic of reality. Language itself is logical associated relationships. Mathematics is essentially trying to describe itself. The Universe is all of the true logical relationships, including the errant perceptions that fall away from development. The process produces unique logical relationships. Each one is a seemingly imperfect reference frame of the total perfect system. In completing the total system, each component is really perfect. There is no reason not to love the Universe.

  beepers contents

Beepers Improvements

 

There isn’t much room for improvement of the beepers in the C128. Development of this system has just barely begun though. The PC should afford much opportunity to develop a next step.

The beepers will be transferred to PC, pretty much as is (with some "brakes on"), as a test of my acclimation to the new environment. Then, they will be expanded to approximately three times as many Ns, at three times the size of N. I’m hoping this will allow the system (SLC2-66) to run about ten times as fast.


[Update 7/2002... I never got the kind of time needed to work on this with the
SLC2-66... I have finally gotten started, using a PII-333 to develop a system intended
for a P4 @ 2 or 3 GHz. Hopefully I will have some progress to report some time next year.]

[Update 11/2004... the beginning of a version for a 2004 PC is available on
the  Downloads page.]

Since RAM is not a factor, I will experiment with indexing schemes, to see if the N complement can be increased by a full order of magnitude. I want their reach to be the square root of their total number, and their Out list to be about 10% of their reach. The latter figure is just a guess, while the former figure stems from observations in the literature. Perhaps the latter figure should also be the square root. Along these lines, to better parallel neural organization, there are indications that the bi-directional columns should differ in their Out M page offset, so as to have as many offsets available as there are pages. This may not be such a general thing, however. The neural data may give us reasons to restrict certain kinds of access to certain sections of memory.

 

The most fundamental structural concept may not be getting applied here too well; the geometric progression of feed-forward-feed-back levels is not obviously present. I have reasoned that it is vaguely there thusly: we have our In Area / Out Area, followed by our In Page / Out Page, followed by the Peripheral Area, followed by a rather skimpy Hi Area. It would be nice to make the Hi Area about ten times as big as the Peripheral Area. It may seem that the mechanism of ten-times-distributed-feedback is not present. At least part of it is, as the feedback impression is available to every N of a given page. The tags that are set up, sit there to be latched on to by any N of a given page. In effect, there is a real advantage here — with only 1X feedback there is complete distribution. The biological neural net must allocate physical neurons in numbers to facilitate distribution. This could be a significant factor in comparative considerations of neural count. Organization is very important. I intend to study the question of geometric progression, to better convince myself that it is being facilitated in the developing model.

The answer to faithfully facilitating the geometric progression may lie in the application of the above Out M extensions. This expansion of Out M reach could come in sections of a little, more, and a lot, for example. The shortest reach is from the smallest section, feeding into the next larger, longer reaching section, which feeds into the largest section, having reach to all, or most of, itself. Each successive section would also have the return reach of 1X distributed feedback, only to itself and the pages of the smaller section just before it.

I also have a nagging feeling that numbers and organization need to be adjusted to bring the handling of I/O data into a more reliable, deeper memory, sort of mode. I think we’re only handling fragments of the ongoing train of world information; and that association of those fragments is limited to very short trains. A better parallel is needed between the first stages of I/O and the neural success. This may involve the strategic location of some larger Ns. It is possible, however, that the solution here will come automatically with the even expansion of N population, at all levels of the geometric progression.

I am most looking forward to raising the level of quality with which data is presented. I think the little C128-sized beepers could have done much better if the system could have handled the data about ten times as fast, and done so in terms of relative amplitude of multiple frequencies. The next ear may be something like a 16 channel spectrum analyzer. The channels would be simultaneous frequency centers, with relative weights in terms of amplitude of those frequencies. The highest channel will be a high pass filter, rather than a narrow peak. This will correspond to an output capacity for green noise. The rest of the channels will be geared toward speech tone components. Pure-tone whistle behavior may still be possible, probably in two simultaneous voices. The relative pitch of those tones may be fixed by voice synthesis considerations. A factor of rapid vibrato may also be made available.

It is tempting to make use of voice synthesis technology. In the extreme, whole words could be called up by way of 8 or 9 bit codes. The artificial intelligence (AI) system would then only be concerned with attaining control over the flow of higher reasoning. I suspect that the system would have inadequate access to its I/O to develop that power of reason. It seems like this would require too critical a handling of essential details — getting a single bit wrong in designating a word would yield an entirely different word. I think we need to expect these errors; and expect them in great abundance at the start of learning. A single-bit error should amount to a minor variation within a small portion of the time of a single word. It would be nice to have a voice synthesizer who’s component phonics could be called up in such a way. But another essential ingredient to successful learning is a reliable relationship between such generation and differentiated audition. Therefore, I will be putting prospective electronics on the bench, and testing it through a direct loop for audible legibility. Such testing may reveal that more or less than 16 channels are needed. I may begin with as few as 4.

A range of high speed vibrato could be detected in the envelope as a single channel. Two channels would facilitate a range of rates, over a range of absolute amplitudes. Another two channels could handle pitch and amplitude of the fundamental voice tone. The speech generator Out side might automatically include harmonic components, or tandem frequency relationships that aren’t detected, but serve to make the output more intelligible to humans. A fifth channel for output as well as detection could handle instantaneous amplitudes of audio noise.

I suspect that the use of fewer channels will improve higher learning capacities, by impacting on the utilization of the Ns to the minimum degree necessary for reliable handling of voice components. The approach in mind here is to make a complete variety of voice commands available, and discretely detectable, while leaving the maximum possible proportion of matrix free to handle successive levels of association. The overall system might work better, however, if this approach is multiplied redundantly, to operate in parallel configurations of near- identical outsets. Perhaps these frequency centers should all be slightly different. The thought here is to facilitate the production of answers as averages of similar parallel series trains. The inherent differences might enhance the production of unique synthesis, under slightly varying contexts of re-visited trials. In other words, we may find it advantageous to emulate neurological buck-passing redundancies. Or, the above sort of scheme may exist in addition to that stand-in capacity.

I have a nice book on the neurological anatomy of hearing, that I haven't cracked yet; because every such new book has driven me to re-work the system. I had already postponed writing this book for too long, and that project really belongs to a new phase, in the PC. I will need a similar anatomy book dealing with speech. These books could alter the PC project in ways I cannot now elucidate.

 

This leaves us with considerations of structures for the support of higher order thought. Here we will attempt to parallel the hippocampus, the temporal-frontal mechanism, and the limbic system. I think this complex set of subsystems works as part of one system including the cortex, and the rest of the brain, to give us prioritization-emotion-chronology. We will start simply, and attempt to get more complex.

Developing the hippocampus will start as a focus on prioritization. The capacity of the Key will no doubt increase, and its distribution will aim to parallel the locations of our real positive feedback extensions. The beepers’ hippo was coarsely modeled after mouse cortex. More detail is probably called for in its mechanics.

Computer chemistry is fixed. This implies that AI may never get emotional. At this time, my assumption is that emotion is a matter of context for the ongoing thought process. For humans it involves a number of fundamental autonomic components, that are involved with such things as heart-rate, breathing, perspiration, percent of neural participation, crying, etc. It also involves behavior routines like laughing and running; and repairing damage — and the attendant recognition of long term consequences. The perception of these data packages plays a large part in the experience of emotion. Most of this data will not be readily available to the PC. But another important variety of factors, involved with emotion, might be available as parallel logical function.

I suspect that the brain side of the blood barrier is an ongoing mix of chemical factors contributed to by a given limbic-autonomic condition; and that this chemistry serves to adjust the participation potential of synapses into a context that parallels past experiences. In this way, the conditions of present experience can serve to cue awareness into appropriate survival stances. Brain chemistry adjusts the frame that your mind operates in. There are certain frames that are always emphasized by certain sets of autonomic conditions; by such things as pleasure and pain. The neural messages of pleasure and pain serve to stimulate the surviving associated autonomic and blood chemistry responses. The perceivable qualities of autonomic response are very similar in either case. What’s different is the categorization of mental association. Pain ends up getting more prominently associated with all the past experiences of pain, while pleasure is more associated with all the pleasure; thanks to the reliable difference in chemical influence produced by the trials of life.

We can approach a parallel systemology here by making a portion of the matrix convertible, with respect to its memory content. At the same time we will consider ways of keying category tree levels of alternative memory set combinations to assessments of the character of ongoing experience. At a finer level of detail, this systemology may also support improved temporal performance.

The system would be supported by a hard drive. Emotional disposition, as well as the context of a given train of thought in general, are not required to change from one split second to the next; yet parts of the RAM could be modified that quickly, from the hard drive. Full application of the concept would involve an intermediate RAM drive, using perhaps ten times the RAM as that of the active Ns. The convertible portion of the active Ns would be less than the total active matrix. The convertible area would be accessed like a utility by the more constant, larger frame of the general matrix. That frame would dictate what part of the RAM drive would be accessed at what time. A longer term product of the process would issue orders to the hard drive to alter part, or all, of the RAM drive contents. The former process would attempt to approach a parallel to the support of longer term temporal sequencing, while the latter action would parallel emotional framing. Both types of orders might be generated through our bigger and better Key, which would be analyzed to address the RAM drive and hard drive. Or, another Key system or two would be developed to handle these jobs. These systems would relate to each other, and the general matrix, to simplistically parallel the system primarily encompassed by the hippocampus, the limbic system, and the temporal and frontal lobes.

 

A long term recollection of reality is always "pieced together." We never recall reality as though we can push a playback button of perfect, complete chronological events. We attempt to do this, and little snippets of accurate portrayals of our experience can be temporarily stimulated. In general, when asked or required to remember things, we get there by association. We think of more details later, through retrospect of where our story goes, to further stimulate associated aspects; and more or less jump around in time to fill in the gaps, or increase the detail. After doing so, we may then be prepared to verbalize the train of events in a more straight forward manner. The degree of jumble here is of course a function of many things like story length, depth and age; and whether, or how often, the story as been told before.

Associations lead to associations, as we work our way through the story. This is also the case when we try to remember what it is we need to do. The latter is a sort of framework that runs the former when asked to remember past portions of history.

Evidence in the literature has brought me to imagine that the temporal area is used to store keys. Perhaps the hippocampus-cortex indexes these keys by association, or by the special case for association we call chronology, where various landmarks of the past are used to work our way to a particular time. The system, or the hippo itself, transfers a short sequence of key into a portion of itself, where it can act in conjunction with the overall key to stimulate an associative parallel sequence in the cortex. When this runs out, the system would again turn to the temporal area for the next bit of key; this time definitely operating in the chronological associative mode. Chronologies may be facilitated through the association of experiences with reference logic progressions. This would amount to a scanning system that runs during experiences; and is re-run to stimulate chronological adherence of associative recall. Perhaps one function of the limbic system is to trigger such a scan during "record" of experiences that register as highlights, or as being more important to survival. Perhaps dream time re-enforces most of these scan/perception associations. A similar system could run in abbreviated intermittent form to fetch a slow series of key fragments from the temporal area, for use by the hippo.

It is important to remember that we aren’t trying to create an actual recording. The philosophy here is based on brain function by association. Perception is that combination-unlock into your mind. The immense variety of potential patterns leads us to believe that our mind has limitless storage capacity. What’s really happening, is that we always piece the story together, about the same way, through association, as supported by occasional changes in the key stimulation. The keys are like a recipe with which to re-build limited chronology from familiar multi-purpose components, built from perception. The temporal mechanism may entail a compound application of this recipe-construction process; where a similar system, running more slowly, serves to produce the needed series of keys, or key-change factors, to the hippo and/or limbic stimulation system. Short stretches of chronology are permanently associated together in the cortex between these common pieces, in a multiplicity of arrangements, as laid down in past repetitions of similar experiences. The more different a given aspect of experience is, the more likely a key fragment will be generated for addition to the temporal collection; by association with the chronology of events as well as by association of qualities. That’s the position taken in the temporal matrix by a given key fragment. It’s a when-what combination lock. Applying either factor will tend to yield a problem of choosing the other factor. That problem passes to yield a key fragment choice, which is utilized, more or less tentatively, in handling its parent problem. The process becomes more adept as it is refined by feedback amidst the social environment.

The process of conscious induction in this model is not unlike electronic induction. It is the same sort of thing, taking place at a higher level of dimensionality. It is a process composed of relative sub-processes. The behavior of charge is interdependent interaction of our parallel series of atomic components. The primary factor elevating conscious induction above its more fundamental component inductions is the compounded presence of memory. There are more fundamental levels of time involved. Component induction serves to support pattern repetition. Without the component induction, there would be nothing to make the patterns out of, since they, like everything, are a definition of time, even when they depict a static quality for our notions. The patterns too, then, are available as components for higher reasoning, when the system has succeeded to excel by means of its temporal mechanism.

 

I will be considering a number of options in attempting to facilitate a chronological utility. It has become very valuable to be able to mentally and/or verbally "play back" a series of events. This is a fundamental component of higher reasoning and problem solving. We tend to do it repeatedly, too, while simultaneously "looking" for a particular aspect, relationship, or answer for another thought train. You could say that train with the need is dominant to the snippit, and the snippet runs by association out of the problem. The problem train leads to, and can get side-tracked in, the snippet. The snippet more or less modifies the problem train, as it continues on to require more snippets. Like a mental cell-building project, the problem train accumulates components from memory, as well as from the most recent memory — the environment. An even higher-order train is concerned with attempting to sequence the problem trains. This goal train is put on hold for even longer periods, as a given problem train runs its course. The goal train tends to get primarily involved with the environment. Social influence drives it to invoke its problem handling routines.

I suspect that my approach will view the goal train as a problem train that simply develops out of the long run. Both processes will be guided by the same key-train placement system, focused in the hippo, utilizing temporal key fragment storage. The goal train is just another problem train. It gets called up by conditions created by the environment, as well as by the course of problem trains. It is the learning induced by social example, that orders our problem trains.

Subordinate to this process then, will be the temporal utility. It must support accurate sequencing, as called for by the problem trains. This function would carry over to problem trains concerned with ordering the other problem trains, since this activity is just as chronological as any conscious process. Such a system might develop finesse. At this writing I imagine three ways to approach this. The method that seems most natural and simple also seems least reliable. The process may necessarily begin as an unreliable one, that becomes reliable through experience. It becomes accurate over time with feedback in context to social example and interaction.

For this system, a portion of the hippo key will be permanently designated as an addressing interface to the temporal section. That section will be a cross section, to some extent, of the general cortex. My first guess is that it will fall short of the first page(s) of I/O. I suspect that it should also be separated from the general cortex. It will probably be a separate, narrow version of the general cortex. Instead of environmentally oriented I/O, its I/O will be involved with the hippo interface.

To continue this parallelism, the high area of the temporal cortex could be used to generate a key fragment. That fragment would be installed in the hippo key, or would possibly act on the general cortex directly, as though the temporal area itself were a sort of hippo — maybe without the positive feed-back system. There are a number of considerations that come to mind here. This fragment could overlap with, or fully contain, the area of hippo that addresses the temporal area through its I/O interface. This question is nested within a larger one concerning hippo access to the general cortex. One or both of these sub-key areas may simultaneously interact with the general cortex, in the manner of the overall key. The answers to these questions may be critical to natural, successful mental function. It would be good to know the anatomical relationship between the hippo, the cortex, and the temporal cortex; and then later, to involve modeling for the frontal lobes and limbic system; and to know if, like the speech cortex, the temporal cortex has a unique "grain" or pattern of organization.

Another complex question involves the manner and timing with which the temporal key fragment would be transferred to the hippo. Perhaps this would involve yet another interface that can generate transfer orders. If so, this sub-system would require some sort of meaningful input with which it could learn to function correctly over time, with feedback. My first guess here, I think, will be to take the simpler natural path once again, allowing the system to run continuously. The temporal high area key fragment will be a constantly changing portion of the hippo key. To provide some hope that the system will learn to develop the chronological utility, that portion of the hippo will include some part of the hippo-temporal I/O interface data. The whole hippo key will interact with the general cortex, so that the actual chronology can cue the temporal system. In other words, the temporal utility involves an area of cortex just like the general cortex, but isolated from the environmental I/O influence, as well as from the general cortex itself. It’s a separated, smaller version of the general cortex, isolated to act as an appendage of the hippo. Its function would be to alter a part of the hippo key, in response to the condition of a related portion of key. Part, or all, of these special portions of the key would be affected by, and would affect, the action of the general cortex, in the same manner as the general hippo interaction.

A variation on this would involve stimulation of the general cortex, by the temporal cortex. Here, the temporal high area could generate the key, to be used to stimulate the general cortex, probably without the feedback arrangement of the hippo system. Again, a choice must be made to either let the system simply run continuously, or to somehow meter the stimulation events, preferably through a feedback event, rather than by timer. Also, a choice is required regarding just where this stimulation is placed in the cortex, with respect to the hippo extensions. The considerations for cortex-hippo-temporal I/O relationships would be the same. I think about brain modeling in this way, then I read some books to whittle away at the nested conglomeration of choices.

This first approach leans heavily on the principle of learning. It supposes that we learn how to reason with a minimum of specialized supporting brain architecture. The second approach would embellish on all of the considerations of the first, by including a system of timed stimulation, who’s association with the course of reality’s impingement on the system, would serve as a scanning mechanism. The scan is a logic sequence present during "record," so that its repetition will serve to conjure up a "playback." This smacks of resorting to technology to get our natural chronological utility. Though we are working in a computer, the honest attempt here is to model brain function with realistic parallel functions. There are indications in the literature that neural circuits produce oscillations for specific functions. The most pervasive example is our heartbeat. So, such recourse may be the more realistic approach.

The logic sequence could run for something like five minutes, then repeat. Each cycle could include one event that triggers the capture of a new key fragment that will remain in place for the next cycle. The old key fragments are stored in chronological order, so as to be invoked in order, as required. It is difficult to imagine a natural interface and medium for this. Perhaps this is where a compound function is involved. A very slow logic sequence is presented to the temporal area in association with the series of key fragments. This time, the logic sequence acts as the key. While it is always running in progression for the record function, it can simultaneously be shifted, as needed, to attempt access to a historical position in the fragment sequence. Since this process is part of the current chronology, its effects must also be recorded. The key fragment called up from history must be re-recorded as the current entry.

I am not motivated to spend too much time speculating about this sort of system. If this is the correct way to go, then it is a way that begs to be modeled with accurate specific details that must be gleaned from the anatomical data. The third approach seems almost blatantly technological. A portion of the key fragment could stand as a sequencing code. Again, there are abundant examples of similar technique operating in biology as DNA and RNA cell interactions.

 

A scanning function could be facilitated with a simple digital counter. The counter would constantly be running, forever counting upward, to facilitate association of a new data word, related to time, with whatever train of thought or perception is currently taking place. At one count per second, such a timer could run for 100 years if it had 32 bits — if it were implemented with an interface involving 32 Ns. It could run twice as fast, or last twice as long, if another N were added.

The interface would constantly be firing our count progression. For playback, perhaps this same set of Ns would alternate firing the count sequence of interest. I think we might want to withhold this playback function for the first few months of mental development, then gradually make it available through intermittent windows. This way, this part of our system would be less likely to simply develop as more of the general ongoing cortex-hippo system of current perception. We want the main high area, which is meant to parallel our association cortex, to discover its access to this other cortex, and develop control over it as a utility.

The interface for our high area is already in place as the hippo system. A section of the key can be designated as an on-off switch for the function. A few Ns should be averaged for this, rather than using a single less reliable N. Another section of the key can stand as a starting point for our playback counting sequence. This section could consist of a few sub-sections that, again, are averaged. Whatever sequence is developed here by the high area - hippo - regulation - world interaction, is the one used as a count starting point. It might be held there in the hippo key so long as the playback on-off switch is in the on mode. This would provide associative data to the high system during the playback sequence, improving the odds that the system will be able to repeat the elicitation under related circumstances.

On any overall system cycle where the on-off switch changes from off to on, the starting count is captured and sent to a playback count generator. This stimulus is applied to the temporal timing input, in alternation with the constant count generator. The whole hippo key is also constantly applied to the temporal cortex, at its data input, which is essentially an extension of the timing input; in such a manner as to insure reliable association. These inputs should probably exist, and fire, in redundant multiplicity, to improve associative reliability; in keeping with the philosophy of averaging that was applied to the key sub-sections (it should probably be applied to data/motor I/O as well). Obviously, there is a limit to how many snippets could be stored, even though we are only saving hippo recipe cards. As in the general case, those routines that are repeated more often, would be elevated to greater positions of longevity. This would be a good candidate for a convertible area involving hard drive support. Perhaps this area will be saved to a file that is labeled by its highest count sequence during record. Playback might arise from a separate cortex that is loaded from the hard drive, according to the playback starting count.

To bring the system full circle, we could project the high area of the temporal cortex into the general high area. We’d do this in such a way as to insure an associative effect on the hippo. Now our temporal cortex plays back to the general cortex at the same potential set of points that it stimulates during record. Record is a process associated with current perception. Playback will tend to "suck" info out of the perceptive ranks of general matrix, by association. The playback pattern should be more or less immune to regulation, to insure the re-creation of reliable snippet trains. Similarly, the temporal matrix itself must be placed into a set of conditions that echo those present during record.

To make this fully reliable, we could actually record the series of hippo keys. Those keys are filed by, and include, the 32 bit count sequence. The play command would continue use of one key for stim, directly to the high area, until a parallel count advances to the position of the next key in the stored sequence. The count might advance every time the hippo key changes; or, more likely, capacity restraints would require less frequent sampling; perhaps by way of requiring a certain minimum amount of change in the key. During periods of no playback, the current key should stim the playback area, so that future repetition will induce the desired tendency toward playback. We might add another drive every few years. To be more realistic, we could bump the oldest, least used members. Each entry could include a popularity-age code. The popularity part is advanced each time it’s used. As soon as one would wrap, they’re all halved, or the others are reduced by one. To bump an entry, we select the oldest member within the group of lowest popularity. We don’t start the bumping routine until the drive is full. We could save a sharper sample of keys if we use the bumping routine, as well as upgrade the drive every year.

Another system that would call for the use of another drive, would support a convertible area of cortex. This utility might develop as a sort of encyclopedia. It would interface along the same lines as the temporal system, but would not involve a chronological cue. We would not be recording a large sequence of small elements. The convertible area would be the complete data definition of a cross-section of cortex. Another section of sub-sections of the hippo key would determine which chunk of data should currently reside in the convertible area. The size of this address key, and of the cortex area, would be geared to the size of the drive. Any possible address request would correspond to an area in the drive, the same size as the convertible cortex.

Anytime a new set of data is requested, the last set is returned to the drive, in its updated condition. Every time we change data in the convertible area, operation of the entire system must be suspended. Therefore, we will want to involve a RAM drive buffer. When change requests are coming too frequently, they too could go into a buffer; though I suspect it will be preferable to average the intermediary requests, to stabilize the utility, and to better associate the generated request with the result, in a timely fashion. I feel that this would yield the best development of usefulness, through associative feedback.

For this system, the cross section of cortex would not be an isolated section, as proposed for the temporal area. It would be an actual cross-section of the general cortex. The section may extend all the way down to the pages just before the In and Out pages. However, my first approaches will all include the high area, and either the first or second one will be restricted completely within the high area. This version will be tried as a cross-section, "to the side;" but will also be tried as an upward appendage, on top of the high area. I think of this utility as an attempt to primarily parallel the frontal lobes.

 

This sort of systemology could be applied to the considerations for our temporal utility. The hippo key section, designated for addressing the temporal area, could act to swap data into the temporal cortex, as well as to be involved with the other proposed functions. The initiation of a temporal sequence, via the code in a hippo key sub-section, could stand as a starting point for longer-term trains. Some event could trigger successive data swaps from the hard drive, until the on-off event terminates use of the utility.

 

A third application for hard drive space would be an almost identical frontal system, meant to support emotional influence. About the only difference I see at this simplistic point in the considerations, is that the address segment would not come from the hippo; but rather, would be derived from survival sensibilities. The only pertinent sense to consider here would relate to degrees of failure and success in learning. There are no options concerning food, sex, mobility, cuts & scrapes, etc. I cannot now imagine a way to implement this utility, in a completely automatic way. The only mode that comes to mind involves parent feedback. We could have a set of buttons that stand as a grade scale; and push ‘em as we see fit, whenever we’re inspired to do so by the system behavior. I would suggest that all grades be regarded as degrees of positive feedback. The least positive feedback is no feedback. The higher grades are stronger doses of stimulation... a larger part of this convertible area would get swapped, for a longer period of time.

 

A simple utility, that could be added to the beepers, is a time reference. This would be a constant beat of perhaps one per second, applied reliably to a single N, or a few, somewhere off to the side... perhaps one near the In Page, one near the Out Page, and one in the High Area. The one-per-second reference would be derived from a real time clock. It would provide a sensibility for world time, and reduce the indebtedness to regulation and system flow, in obtaining any associative assessment of the nature of time frames. Additional Ns could be called to join the beat for emotion grades.

 

Perhaps a complex parallel truth involves a particular combination of the above features, together with a few others. Conversely, perhaps we’d be spinning our wheels here if we attempt to produce accurate sequencing influences with simple technological mechanisms. The answer may require a recognition of higher order dimensionality acting in the specific relationships of functions of portions of matrix.

 

Our hippo key is emerging as a more complex thing, involving a variety of averaged sub-sections. Introspection suggests that we have a hierarchy of keys. The highest key stimulates considerations for our life's goal train. Subordinate to this are numerous sub-goal trains that become the means and structure of our life goal accomplishments. The sub-goals entail an even larger number of problem trains. Each problem train involves a huge number of "frames" of perception, and more or less rote response. Each frame is an enormous temporal pattern of neural interaction, lasting some one-thirtieth of a second.

At first though, it might seem that our hippo installs the highest key now and then, somehow, for some reason or another; and then, even though it is top dog, it puts itself on hold for the vast majority of the time, allowing its subordinate components to take over. They, in turn, transfer use of the key to yet lower order operations, and so on. Some mechanism would support the re-instatement of higher order key stimulation, as nested sub-routines are completed. This way of thinking arises from computer programming. One develops a different kind of outlook, by allowing influence from the literature covering brain structure and function. With the right sort of fundamental neural structure, the needed mechanism can be created largely through learning.

Part of the mystery here can be reduced by recognizing the role of the environment and society in the development of the current state of our overall set of such mechanisms. The environment, including its people, is the primary drive of the mechanism. The mechanism has been shaped as it ran and developed in interaction with all that stimulus. Each such case contributes to the successive condition of it all, as the development of dimensionality. Before the people, there were animals; and before the animals there were cells. Before the cells, there was chemistry. We still have the chemistry, we still have the cells, and we still have animals. As time goes on, the system develops in complexity, as it interacts with itself. The process of interaction is the shaping of surviving complexity, out of surviving relative simplicity. Ultimately, with respect to a humanity of nine dimensional systems, our children, our language, and our organizational accomplishments will have acted to contribute to the completion of the tenth dimension, which, in turn, is involved with all higher dimensionality.

So, the source of modifications for our hippo key is far more complex than the cortex it interacts with. The environment, relative to a human, has become a highly ordered system of influence.

The tentative hippo key has a collection of specialized sub-sections. At times, the hippo will be acting to guide specialized sub-processes entailed within the course of deeply nested problem solving routines. At the other end of the spectrum, the time will come around for handling the highest of higher considerations. These times are defined, and imposed, by opportunities presented by the environment. First we develop more mundane capabilities, which serve as a basis upon which to develop higher capabilities. As the system of self- and environmental- interaction becomes more sophisticated, it becomes capable of being receptive to the higher influences that can work to construct elements of an overall goal train.

These elements are associated with other elements of thought in the same way as all elements of thought and perception are. We do not need a specific mechanism to run the course of key hierarchy involvement. It will naturally take place in reference to the train of thoughts, which itself takes place in reference to the conditions, opportunities, and examples offered by the environment.
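
A rough way to picture "no specific mechanism" is plain associative pull: the next focus is simply whatever is most strongly associated with what is active now, so higher-level elements resurface on their own whenever a finished train or the environment cues them. The weights and names below are invented for the illustration.

# Hypothetical associative selection: no stack and no scheduler, just the
# strongest learned link deciding what comes to mind next.

associations = {
    "finished the report": {"career goal": 0.8, "lunch": 0.3},
    "career goal":         {"life goal": 0.6, "next project": 0.7},
}

def next_focus(current):
    links = associations.get(current, {})
    return max(links, key=links.get) if links else None

focus = "finished the report"
while focus:
    print("focus:", focus)
    focus = next_focus(focus)
# finishing the report pulls the career goal back in, which then cues the next project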

We tend to forget how to do things, if they’re new, or they haven’t been done for a while. But we don’t as easily forget how to re-learn how to do them again, because we constantly use our re-learning routines — we often juggle routines in and out of our current-routine-handling system.
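
One way to sketch that juggling is a small current-routine store with decay, where bringing a dropped routine back costs much less than learning it did in the first place. The costs and names below are arbitrary assumptions made for the illustration.

# Hypothetical sketch: routines fade out of the current-routine store,
# but re-learning a routine is cheaper than learning it fresh.

LEARN_COST, RELEARN_COST = 10.0, 2.0

current_routines = {}                    # routine name -> current familiarity
known_before = set()                     # anything we once knew how to do

def practice(name):
    cost = RELEARN_COST if name in known_before else LEARN_COST
    current_routines[name] = 1.0         # fully refreshed
    known_before.add(name)
    return cost

def idle_period():
    """Time passes; unused routines fade out of the current set."""
    for name in list(current_routines):
        current_routines[name] -= 0.5
        if current_routines[name] <= 0:
            del current_routines[name]

print(practice("tie a bowline"))         # 10.0 : a new skill is expensive
idle_period(); idle_period()             # forgotten for a while
print(practice("tie a bowline"))         # 2.0  : re-learning is cheap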

The tentative key has an area that is always involved only with the ongoing process of regulated cortical interaction. Other areas of the key, with the same interactive relationship, will at times stand to involve utility functions. The overall flow of involvement and non-involvement of utilities may build upon itself to produce greater or lesser moments of higher problem handling procedure. The accomplishments of the system will be limited to the possibilities that can be generated by the interaction of the system with the quality of its running environment. Ideally, one would build thousands or millions of variations and let them compete; that process would assist the technological development of more advanced characteristics, by allowing us to recognize the capabilities produced by a given combination of features. I don’t have time for that. We will have to be satisfied to limit such endeavors, and guide them with as much reason as we can derive from our knowledge of biological success.
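
For what it is worth, the competition I am declining to run at scale would look roughly like this: generate combinations of features, let each variant run against its environment, and keep whatever scores well. The scoring function below is a random stand-in, and the feature names are only examples; the point is how quickly the loop grows past what one person can build and evaluate.

import itertools, random

# Hypothetical brute-force competition over combinations of features.
# The alternative taken here is to guide a few hand-chosen designs instead.

features = ["constant-mass dendrites", "hippo key", "utility areas", "feedback loop"]

def run_in_environment(combo):
    # stand-in for actually running a variant against its environment
    return random.random() + 0.1 * len(combo)

best = max(
    (combo for r in range(1, len(features) + 1)
           for combo in itertools.combinations(features, r)),
    key=run_in_environment,
)
print("winning combination:", best)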

 

 

In working with the PC, note that you might want to expend the effort to ensure that constant mass is involved with at least the dendrites. If DOS is always changing things, then we’ve lost a potential parallel feature. Logically, it would seem that this shouldn’t matter. It should not affect behavior. Even the human model is more than 99% variable mass. Still, the human model has some constant mass, whatever that is, and it is located quite conspicuously at the heart of the support system of consciousness. Why not include a parallel feature if possible? Then see if it does have any effect on behavior.
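
One hedged way to read "constant mass for the dendrites" on the PC is to allocate the dendrite weights once, in a single fixed block, and only ever update them in place, so the operating system is never asked to move or rebuild them. The sketch below uses Python's array module purely to illustrate that discipline; whether the parallel actually matters is exactly the open question here.

from array import array

# Pre-allocate one fixed block for all dendrite weights, then only ever
# write into it in place; the block itself is never resized or replaced.

N_DENDRITES = 4096
dendrites = array('d', [0.0] * N_DENDRITES)    # allocated once, up front

def adjust(index, delta):
    dendrites[index] += delta                  # in-place update only

adjust(17, 0.25)
print(dendrites[17])
# no append() and no reallocation: the "mass" stays where it was put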

 

 

If we can get a PC to behave reasonably, it will make some of the notions of this book more plausible. If a system appears to be conscious because of its active logic structure, then overall subjective reality might be a logical process that includes natural awareness. Within this, time must be simultaneously independent and integrated. Whether you’re a person or a computer, the flow of thought would transfer to a new head if the data were faithfully transferred. The data is the relative weight-pattern function structure of inter-connective influence within the brain, in the context of the history of interaction that was the overall atomic-photonic environment. If a duplicate head were synthesized, it would continue thinking in the exact manner of the original head at the moment of "copy." This means that it would not be surprised and ill-prepared to find that it suddenly exists as a being. It would be surprised that the environment suddenly looks different. It would think that it was the original system, located at a new position in the world. That new position is its new source of time, arising from the mass that is its basis of dimensionality. If the original head were you, the new head would be more of you, while each head would begin a departure of thought from the other. All of our heads are so departed, from eternity.
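
In PC terms, the "copy" is just a faithful state transfer: serialize the full weight-pattern structure, load it into a second instance, and from that moment the two instances compute identically until their inputs diverge. Here is a minimal sketch, with an invented Net class standing in for whatever the real system would be.

import copy

# Hypothetical sketch of the duplicate head: copy the full connective state,
# and the copy continues exactly where the original left off, until the two
# receive different input.

class Net:
    def __init__(self, weights):
        self.weights = list(weights)
    def step(self, stimulus):
        # stand-in for one frame of interaction
        self.weights = [w * 0.99 + stimulus for w in self.weights]
        return sum(self.weights)

original = Net([0.2, 0.5, 0.3])
original.step(1.0)

duplicate = copy.deepcopy(original)                  # the moment of "copy"

print(original.step(1.0) == duplicate.step(1.0))     # True: the same thought
print(original.step(2.0) == duplicate.step(1.0))     # False: the departure begins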

 

 

We do not require traditional math proofs to validate our acceptance of each other as sentient beings. We act on the assumption that this is true, based on our feelings as they develop in interactions. You know. You know yourself; and you know that others are that same knowing, of its time, based in a different variety of data/prioritization. The math that is the running system might mean more to you than any series of formulas on paper ever could. A formulas-on-paper approach may be inherently insufficient as a method of dealing with the conscious level of dimensionality. Perhaps the modeling must be active. So, we are beginning to try. This approach shares math’s aim of paralleling aspects of reality from a basis of static conceptualizations, but goes beyond it dimensionally, to do so over time, as active interaction. It is a more apparent, synthesized example of reality as math.

 

We might get so far as to decide that we have made the PC conscious, and that this proves we are all each other. Maybe the whole idea here is just wrong. Maybe it’s right and we just won’t get it. I hope we will find proof, and that it will improve world relations.

 

Life is heaven. This is where your current memory gets to be. You get to be with many variations of yourself. You have the opportunity to appreciate, improve and enjoy yourself here, in many ways.

When you die, eternity takes you to "another" heaven. It will not be better or worse than this one, because it too will only know itself. It may come with greater responsibility. Sometimes your memory there will get larger, and of more detail, but it will pertain to the perceivable realm that is there, just as all of your versions of memory here pertain to here.

The point communicates with itself to be the eternity you come from, belong to, return to, and re-emerge from, in infinite variety, as us all, here on Earth; and as all of you throughout the Universe... every conscious creature.

contents