Thursday, June 10, 2004

SCIENCE…any system of knowledge that is concerned with the physical world and its phenomena and that entails unbiased observations and systematic experimentation. In general, a science involves a pursuit of knowledge covering general truths or the operations of fundamental laws…(Britannica, 2003)

The history of science from its beginnings in prehistoric times to the 20th century.

On the simplest level, science is knowledge of the world of nature. There are many regularities in nature that mankind has had to recognize for survival since the emergence of Homo sapiens as a species. The Sun and the Moon periodically repeat their movements. Some motions, like the daily “motion” of the Sun, are simple to observe; others, like the annual “motion” of the Sun, are far more difficult. Both motions correlate with important terrestrial events. Day and night provide the basic rhythm of human existence; the seasons determine the migration of animals upon which humans depended for millennia for survival. With the invention of agriculture, the seasons became even more crucial, for failure to recognize the proper time for planting could lead to starvation. Science defined simply as knowledge of natural processes is universal among mankind, and it has existed since the dawn of human existence.

The mere recognition of regularities does not exhaust the full meaning of science, however. In the first place, regularities may be simply constructs of the human mind. Humans leap to conclusions; the mind cannot tolerate chaos, so it constructs regularities even when none objectively exists. Thus, for example, one of the astronomical “laws” of the Middle Ages was that the appearance of comets presaged a great upheaval, as the Norman Conquest of Britain followed the comet of 1066. True regularities must be established by detached examination of data. Science, therefore, must employ a certain degree of skepticism to prevent premature generalization. This is why we present so many other people and organizations saying the same thing: their agreement is our proof.

Regularities, even when expressed mathematically as laws of nature, are not fully satisfactory to everyone. Some insist that genuine understanding demands explanations of the causes of the laws, but it is in the realm of causation that there is the greatest disagreement. Modern quantum mechanics, for example, has given up the quest for causation and today rests only on mathematical description. Modern biology, on the other hand, thrives on causal chains that permit the understanding of physiological and evolutionary processes in terms of the physical activities of entities such as molecules, cells, and organisms. But even if causation and explanation are admitted as necessary, there is little agreement on the kinds of causes that are permissible, or possible, in science. If the history of science is to make any sense whatsoever, it is necessary to deal with the past on its own terms, and the fact is that for most of the history of science natural philosophers appealed to causes that would be summarily rejected by modern scientists. Spiritual and divine forces were accepted as both real and necessary until the end of the 18th century and, in areas such as biology, deep into the 19th century as well.

Certain conventions governed the appeal to God or the gods or to spirits. Gods and spirits, it was held, could not be completely arbitrary in their actions; otherwise the proper response would be propitiation, not rational investigation. But since the deity or deities were themselves rational, or bound by rational principles, it was possible for humans to uncover the rational order of the world. Faith in the ultimate rationality of the creator or governor of the world could actually stimulate original scientific work. Kepler's laws, Newton's absolute space, and Einstein's rejection of the probabilistic nature of quantum mechanics were all based on theological, not scientific, assumptions. For sensitive interpreters of phenomena, the ultimate intelligibility of nature has seemed to demand some rational guiding spirit. A notable expression of this idea is Einstein's statement that the wonder is not that mankind comprehends the world, but that the world is comprehensible.

Science, then, is to be considered in this article as knowledge of natural regularities that is subjected to some degree of skeptical rigour and explained by rational causes. One final caution is necessary. Nature is known only through the senses, of which sight, touch, and hearing are the dominant ones, and the human notion of reality is skewed toward the objects of these senses. The invention of such instruments as the telescope, the microscope, and the Geiger counter has brought an ever-increasing range of phenomena within the scope of the senses. Thus, scientific knowledge of the world is only partial, and the progress of science follows the ability of humans to make phenomena perceivable.

This Web site provides a broad survey of the development of science as a way of studying and understanding the world, from the primitive stage of noting important regularities in nature to the epochal revolution in our notion of what constitutes reality that has occurred in 20th-century physics, beginning with an explanation of science as seen through the eyes of Britannica 2003.

Cause And Effect

First Cause…in philosophy, the self-created being (i.e., God) to which every chain of causes must ultimately go back. The term was used by Greek thinkers and became an underlying assumption in the Judeo-Christian tradition. Many philosophers and theologians in this tradition have formulated an argument for the existence of God by claiming that the world that man observes with his senses must have been brought into being by God as the first cause. The classic Christian formulation of this argument came from the medieval theologian St. Thomas Aquinas, who was influenced by the thought of the ancient Greek philosopher Aristotle. Aquinas argued that the observable order of causation is not self-explanatory. It can only be accounted for by the existence of a first cause; this first cause, however, must not be considered simply as the first in a series of continuing causes, but rather as first cause in the sense of being the cause for the whole series of observable causes.
The 18th-century German philosopher Immanuel Kant rejected the argument from causality because, according to one of his central theses, causality cannot legitimately be applied beyond the realm of possible experience to a transcendent cause.

Protestantism generally has rejected the validity of the first-cause argument; nevertheless, for most Christians it remains an article of faith that God is the first cause of all that exists. The person who conceives of God in this way is apt to look upon the observable world as contingent—i.e., as something that could not exist by itself.

Phases of operations research

Problem formulation

To formulate an operations research problem, a suitable measure of performance must be devised, various possible courses of action defined (that is, controlled variables and the constraints upon them), and relevant uncontrolled variables identified. To devise a measure of performance, objectives are identified and defined, and then quantified. If objectives cannot be quantified or expressed in rigorous (usually mathematical) terms, most operations research techniques cannot be applied. For example, a business manager may have the acquisitive objective of introducing a new product and making it profitable within one year. The identified objective is profit in one year, which is defined as receipts less costs, and would probably be quantified in terms of sales. In the real world, conditions may change with time. Thus, though a given objective is identified at the beginning of the period, change and reformulation are frequently necessary.

Detailed knowledge of how the system under study actually operates and of its environment is essential. Such knowledge is normally acquired through an analysis of the system, a four-step process that involves determining whose needs or desires the organization tries to satisfy; how these are communicated to the organization; how information on needs and desires penetrates the organization; and what action is taken, how it is controlled, and what the time and resource requirements of these actions are. This information can usually be represented graphically in a flowchart, which enables researchers to identify the variables that affect system performance.

Once the objectives, the decision makers, their courses of action, and the uncontrolled variables have been identified and defined, a measure of performance can be developed and selection can be made of a quantitative function of this measure to be used as a criterion for the best solution.

The type of decision criterion that is appropriate to a problem depends on the state of knowledge regarding possible outcomes. Certainty describes a situation in which each course of action is believed to result in one particular outcome. Risk is a situation in which, for each course of action, alternative outcomes are possible, the probabilities of which are known or can be estimated. Uncertainty describes a situation in which, for each course of action, probabilities cannot be assigned to the possible outcomes.

In risk situations, which are the most common in practice, the objective normally is to maximize expected (long-run average) net gain or gross gain for specified costs, or to minimize costs for specified benefits. A business, for example, seeks to maximize expected profits or minimize expected costs. Other objectives, not necessarily related, may be sought; for example, an economic planner may wish to maintain full employment without inflation; or different groups within an organization may have to compromise their differing objectives, as when an army and a navy, for example, must cooperate in matters of defense.
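The expected-gain criterion described above can be sketched in a few lines; the action names, payoffs, and probabilities below are invented purely for illustration.

```python
# Choosing among courses of action under risk: each action has known
# outcome probabilities, and we maximize expected (long-run average)
# net gain. All figures below are hypothetical.

actions = {
    "launch_product": [(0.6, 120_000), (0.4, -40_000)],  # (probability, net gain)
    "delay_one_year": [(0.9, 50_000), (0.1, -10_000)],
}

def expected_gain(outcomes):
    """Long-run average net gain: sum of probability-weighted payoffs."""
    return sum(p * gain for p, gain in outcomes)

best = max(actions, key=lambda a: expected_gain(actions[a]))
# expected_gain: launch_product = 56,000; delay_one_year = 44,000,
# so "launch_product" is selected under this criterion.
```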

In approaching uncertain situations one may attempt either to maximize the minimum gain or minimize the maximum loss that results from a choice; this is the “minimax” approach. Alternatively, one may weigh the possible outcomes to reflect one's optimism or pessimism and then apply the minimax principle. A third approach, “minimax regret,” attempts to minimize the maximum deviation from the outcome that would have been selected if a state of certainty had existed before the choice had been made.
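The first and third criteria above can be sketched against a hypothetical payoff table; the actions, states, and payoff figures are invented, and "maximin" here is the maximize-the-minimum-gain form of the text's "minimax" approach.

```python
# Decision criteria under uncertainty: the payoff of each (action, state)
# pair is known, but no probabilities can be assigned to the states.
# Actions, states, and payoffs are hypothetical.

payoffs = {              # states: boom, slump
    "expand":   [100, -40],
    "hold":     [ 60,  10],
    "contract": [ 30,  18],
}

def maximin(payoffs):
    """Maximize the minimum gain (the 'minimax' approach in the text)."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

def minimax_regret(payoffs):
    """Minimize the maximum regret: the shortfall from the best payoff
    obtainable in each state, had that state been known in advance."""
    n_states = len(next(iter(payoffs.values())))
    best_per_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
    regret = {a: max(best_per_state[s] - p[s] for s in range(n_states))
              for a, p in payoffs.items()}
    return min(regret, key=regret.get)
```

Note that the two criteria can disagree: with these payoffs, maximin picks the cautious action while minimax regret picks a middle course.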

Each identified variable should be defined in terms of the conditions under which, and research operations by which, questions concerning its value ought to be answered; this includes identifying the scale used in measuring the variable.

Model construction

A model is a simplified representation of the real world and, as such, includes only those variables relevant to the problem at hand. A model of freely falling bodies, for example, does not refer to the colour, texture, or shape of the body involved. Furthermore, a model may not include all relevant variables because a small percentage of these may account for most of the phenomenon to be explained. Many of the simplifications used produce some error in predictions derived from the model, but these can often be kept small compared to the magnitude of the improvement in operations that can be extracted from them. Most operations research models are symbolic models because symbols represent properties of the system. The earliest models were physical representations such as model ships, airplanes, tow tanks, and wind tunnels. Physical models are usually fairly easy to construct, but only for relatively simple objects or systems, and are usually difficult to change.

The next step beyond the physical model is the graph, easier to construct and manipulate but more abstract. Since graphic representation of more than three variables is difficult, symbolic models came into use. There is no limit to the number of variables that can be included in a symbolic model, and such models are easier to construct and manipulate than physical models.

Symbolic models are completely abstract. When the symbols in a model are defined, the model is given content or meaning. This has important consequences. Symbolic models of systems of very different content often reveal similar structure. Hence, most systems and problems arising in them can be fruitfully classified in terms of relatively few structures. Furthermore, since methods of extracting solutions from models depend only on their structure, some methods can be used to solve a wide variety of problems of very different content. Finally, a system that has the same structure as another, however different the two may be in content, can be used as a model of the other. Such a model is called an analogue. By use of such models much of what is known about the first system can be applied to the second.

Despite the obvious advantages of symbolic models there are many cases in which physical models are still useful, as in testing physical structures and mechanisms; the same is true for graphic models. Physical and graphic models are frequently used in the preliminary phases of constructing symbolic models of systems.

Operations research models represent the causal relationship between the controlled and uncontrolled variables and system performance; they must therefore be explanatory, not merely descriptive. Only explanatory models can provide the requisite means to manipulate the system to produce desired changes in performance.

Operations research analysis is directed toward establishing cause-and-effect relations. Though experiments with actual operations of all or part of a system are often useful, these are not the only way to analyze cause and effect. There are four patterns of model construction, only two of which involve experimentation: inspection, use of analogues, operational analysis, and operational experiments. They are considered here in order of increasing complexity.

In some cases the system and its problem are relatively simple and can be grasped either by inspection or from discussion with persons familiar with it. In general, only low-level and repetitive operating problems, those in which human behaviour plays a minor role, can be so treated.

When the researcher finds it difficult to represent the structure of a system symbolically, it is sometimes possible to establish a similarity, if not an identity, with another system whose structure is better known and easier to manipulate. It may then be possible to use either the analogous system itself or a symbolic model of it as a model of the problem system. For example, an equation derived from the kinetic theory of gases has been used as a model of the movement of trains between two classification yards. Hydraulic analogues of economies and electronic analogues of automotive traffic have been constructed with which experimentation could be carried out to determine the effects of manipulation of controllable variables. Thus, analogues may be constructed as well as found in existing systems.

In some cases analysis of actual operations of a system may reveal its causal structure. Data on operations are analyzed to yield an explanatory hypothesis, which is tested by analysis of operating data. Such testing may lead to revision of the hypothesis. The cycle is continued until a satisfactory explanatory model is developed.

For example, an analysis of the cars stopping at urban automotive service stations located at intersections of two streets revealed that almost all came from four of the 16 possible routes through the intersection (four ways of entering times four ways of leaving). Examination of the percentage of cars in each route that stopped for service suggested that this percentage was related to the amount of time lost by stopping. Data were then collected on time lost by cars in each route. This revealed a close inverse relationship between the percentage stopping and time lost. But the relationship was not linear; that is, the increases in one were not proportional to increases in the other. It was then found that perceived lost time exceeded actual lost time, and the relationship between the percentage of cars stopping and perceived lost time was close and linear. The hypothesis was systematically tested and verified and a model constructed that related the number of cars stopping at service stations to the amount of traffic in each route through its intersection and to characteristics of the station that affect the time required to get service.
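The kind of linear relation the study settled on can be illustrated with an ordinary least-squares fit; the data points below are invented, since the original study's figures are not given here.

```python
# Illustrative sketch of the service-station analysis: the percentage of
# cars stopping is fitted as a linear function of *perceived* lost time.
# The data points are hypothetical, chosen to lie on an exact line.

perceived_lost_time = [1.0, 2.0, 3.0, 4.0, 5.0]   # minutes
pct_stopping        = [9.0, 7.5, 6.0, 4.5, 3.0]   # percent of cars in route

def least_squares(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

slope, intercept = least_squares(perceived_lost_time, pct_stopping)
# For this made-up data the fit is exact: slope -1.5, intercept 10.5,
# i.e. each extra perceived minute lost costs 1.5 percentage points.
```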

In situations where it is not possible to isolate the effects of individual variables by analysis of operating data, it may be necessary to resort to operational experiments to determine which variables are relevant and how they affect system performance.

Such is the case, for example, in attempts to quantify the effects of advertising (amount, timing, and media used) upon sales of a consumer product. Advertising by the producer is only one of many controlled and uncontrolled variables affecting sales. Hence, in many cases its effect can only be isolated and measured by controlled experiments in the field.

The same is true in determining how the size, shape, weight, and price of a food product affect its sales. In this case laboratory experiments on samples of consumers can be used in preliminary stages, but field experiments are eventually necessary. Experiments do not yield explanatory theories, however. They can only be used to test explanatory hypotheses formulated before designing the experiment and to suggest additional hypotheses to be tested.

It is sometimes necessary to modify an otherwise acceptable model because it is not possible or practical to find the numerical values of the variables that appear in it. For example, a model to be used in guiding the selection of research projects may contain such variables as “the probability of success of the project,” “expected cost of the project,” and its “expected yield.” But none of these may be calculable with any reliability.

Models not only assist in solving problems but also are useful in formulating them; thatis, models can be used as guides to explore the structure of a problem and to reveal possible courses of action that might otherwise be missed. In many cases the course of action revealed by such application of a model is so obviously superior to previously considered possibilities that justification of its choice is hardly required.

In some cases the model of a problem may be either too complicated or too large to solve. It is frequently possible to divide the model into individually solvable parts and to take the output of one model as an input to another. Since the models are likely to be interdependent, several repetitions of this process may be necessary.
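The decomposition just described can be sketched as an iteration between two submodels, repeated until their values settle; the two equations below are invented toy examples.

```python
# Solving interdependent submodels by iteration: the output of each
# submodel feeds the other, and the cycle repeats until the values
# converge. Both submodel equations are hypothetical.

def submodel_a(b):          # e.g. production level, given inventory
    return 0.5 * b + 10

def submodel_b(a):          # e.g. inventory level, given production
    return 0.2 * a + 4

a, b = 0.0, 0.0
for _ in range(100):
    a_new = submodel_a(b)
    b_new = submodel_b(a_new)
    converged = abs(a_new - a) < 1e-9 and abs(b_new - b) < 1e-9
    a, b = a_new, b_new
    if converged:
        break
# The iteration settles at the joint solution of the two equations:
# a = 40/3 ≈ 13.33, b = 20/3 ≈ 6.67.
```

Because the submodels here damp each other's changes, the repetition converges quickly; in practice several repetitions of the whole cycle may be needed, as the text notes.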

The history of biology

There are moments in the history of all sciences when remarkable progress is made in relatively short periods of time. Such leaps in knowledge result in great part from two factors: one is the presence of a creative mind—a mind sufficiently perceptive and original to discard hitherto accepted ideas and formulate new hypotheses; the second is the technological ability to test the hypotheses by appropriate experiments. The most original and inquiring mind is severely limited without the proper tools to conduct an investigation; conversely, the most sophisticated technological equipment cannot of itself yield insights into any scientific process.

An example of the relationship between these two factors was the discovery of the cell. For hundreds of years there had been speculation concerning the basic structure of both plants and animals. Not until optical instruments were sufficiently developed to reveal cells, however, was it possible to formulate a general hypothesis, the cell theory, that satisfactorily explained how plants and animals are organized. Similarly, the significance of Gregor Mendel's studies on the mode of inheritance in the garden pea remained neglected for many years, until technological advances made possible the discovery of the chromosomes and the part they play in cell division and heredity. Moreover, as a result of the relatively recent development of extremely sophisticated instruments, such as the electron microscope and the ultracentrifuge, biology has moved from being a largely descriptive science—one concerned with entire cells and organisms—to a discipline that increasingly emphasizes the subcellular and molecular aspects of organisms and attempts to equate structure with function at all levels of biological organization.

In technology, instrumentation is the development and use of precise measuring equipment. Although the sensory organs of the human body can be extremely sensitive and responsive, modern science and technology rely on the development of much more precise measuring and analytical tools for studying, monitoring, or controlling all kinds of phenomena.

Some of the earliest instruments of measurement were used in astronomy and navigation. The armillary sphere, the oldest known astronomical instrument, consisted essentially of a skeletal celestial globe whose rings represent the great circles of the heavens. The armillary sphere was known in ancient China; the ancient Greeks were also familiar with it and modified it to produce the astrolabe, which could tell the time or length of day or night as well as measure solar and lunar altitudes. The compass, the earliest instrument for direction finding that did not make reference to the stars, was a striking advance in instrumentation made about the 11th century. The telescope, the primary astronomical instrument, was invented about 1608 by the Dutch optician Hans Lippershey and first used extensively by Galileo.

Instrumentation involves both measurement and control functions. An early instrumental control system was the thermostatic furnace developed by the Dutch inventor Cornelius Drebbel (1572–1634), in which a thermometer controlled the temperature of a furnace by a system of rods and levers. Devices to measure and regulate steam pressure inside a boiler appeared at about the same time. In 1788 the Scotsman James Watt invented a centrifugal governor to maintain the speed of a steam engine at a predetermined rate.

Instrumentation developed at a rapid pace in the Industrial Revolution of the 18th and 19th centuries, particularly in the areas of dimensional measurement, electrical measurement, and physical analysis. Manufacturing processes of the time required instruments capable of achieving new standards of linear precision, met in part by the screw micrometer, special models of which could attain a precision of 0.000025 mm (0.000001 inch). The industrial application of electricity required instruments to measure current, voltage, and resistance. Analytical methods, using such instruments as the microscope and the spectroscope, became increasingly important; the latter instrument, which analyzes by wave length the light radiation given off by incandescent substances, began to be used to identify the composition of chemical substances and stars.

In the 20th century the growth of modern industry, the introduction of computerization, and the advent of space exploration have spurred still greater development of instrumentation, particularly of electronic devices. Often a transducer, an instrument that changes energy from one form into another (such as the photocell, thermocouple, or microphone) is used to transform a sample of the energy to be measured into electrical impulses that are more easily processed and stored. The introduction of the electronic computer in the 1950s, with its great capacity for information processing and storage, virtually revolutionized methods of instrumentation, for it allowed the simultaneous comparison and analysis of large amounts of information. At much the same time, feedback systems were perfected in which data from instruments monitoring stages of a process are instantaneously evaluated and used to adjust parameters affecting the process. Feedback systems are crucial to the operation of automated processes.

Most manufacturing processes rely on instrumentation for monitoring chemical, physical, and environmental properties, as well as the performance of production lines. Instruments to monitor chemical properties include the refractometer, infrared analyzers, chromatographs, and pH sensors. A refractometer measures the bending of a beam of light as it passes from one material to another; such instruments are used, for instance, to determine the composition of sugar solutions or the concentration of tomato paste in catsup. Infrared analyzers can identify substances by the wavelength and amount of infrared radiation that they emit or reflect. Chromatography, a sensitive and swift method of chemical analysis used on extremely tiny samples of a substance, relies on the different rates at which a material will adsorb different types of molecules. The acidity or alkalinity of a solution can be measured by pH sensors.

Instruments are also used to measure physical properties of a substance, such as its turbidity, or amount of particulate matter in a solution. Water purification and petroleum-refining processes are monitored by a turbidimeter, which measures how much light of one particular wavelength is absorbed by a solution. The density of a liquid substance is determined by a hydrometer, which measures the buoyancy of an object of known volume immersed in the fluid to be measured. The flow rate of a substance is measured by a turbine flowmeter, in which the revolutions of a freely spinning turbine immersed in a fluid are measured, while the viscosity of a fluid is measured by a number of techniques, including how much it dampens the oscillations of a steel blade.

Instruments used in medicine and biomedical research are just as varied as those in industry. Relatively simple medical instruments measure temperature, blood pressure (sphygmomanometer), or lung capacity (spirometer). More complex instruments include the familiar X-ray machines and electroencephalographs and electrocardiographs, which detect electrical signals generated by the brain and heart, respectively. Two of the most complex medical instruments now in use are the CAT (computerized axial tomography) and NMR (nuclear magnetic resonance) scanners, which can visualize body parts in three dimensions. The analysis of tissue samples using highly sophisticated methods of chemical analysis is also important in biomedical research.

Armillary sphere from Thomas Blundeville's Plaine Treatise . . . of Cosmographie, 1594…Early astronomical device for representing the great circles of the heavens, including in the most elaborate instruments the horizon, meridian, Equator, tropics, polar circles, and an ecliptic hoop. The sphere is a skeleton celestial globe, with circles divided into degrees for angular measurement. In the 17th and 18th centuries such models—either suspended, rested on a stand, or affixed to a handle—were used to show the difference between the Ptolemaic theory of a central Earth and the Copernican theory of a central Sun.

The earliest known complete armillary sphere with nine circles is believed to have been the metexroskopion of the Alexandrine Greeks (c. AD 140), but earlier and simpler types of ring instruments were also in general use. Ptolemy, in the Almagest, enumerates at least three. It is stated that Hipparchus (146–127 BC) used a sphere of four rings; and in Ptolemy's instrument, the astrolabon, there were diametrically disposed tubes upon the graduated circles, the instrument being kept vertical by a plumb line.

The Arabs employed similar instruments with diametric sight rules, or alidades, and it is likely that those made and used in the 12th century by Moors in Spain were the prototypes.

Erick, my son, was having what I knew to be a spiritual/scientific experience, a psychic opening of the abilities known as clairvoyance and clairaudience, yet he was diagnosed as schizophrenic and bipolar 1, etc. The problem was, and is, a grave error: not enough information to make such diagnoses, and the use of protocols such as drugs as the cure. As a matter of fact, it is the drugs that alter the states of mind and consciousness. We are suffering from chemical and electrical imbalances within the environment and within all life forms, including hue/man. This spiritual happening can no longer be horribly mistaken for a mental illness by the World Health Organization, the American medical and psychiatric associations, and, worst of all, the drug-pharmaceutical organizations and such government agencies as FEMA, the FDA, the Justice Department and law enforcement, the Education Department, and society as a whole.

Erick was not crazy or mentally ill; he was having a spiritual experience that has been mistaken for a disease: the ability to hear the voices and thoughts of others in his head/mind, an ability scientifically known as clairvoyance, clairaudience, mental telepathy, etc. He just happened to know “things”; he was awakened to some awful truths that he himself was experiencing, “things” that are taking place between the government and young people, the children and young adults, right here in New York City. These things involved the children who were being labeled mentally ill and the ones who were dying by accident, suicide, police and school shootings, kidnapping, disappearance, murder, and so on, because they had scientifically been able to click their amygdala, the human “fly back switch” that takes you from limited consciousness to full consciousness, where the reptile brain merges with the mammal brain, the union of the Yin and Yang.

There was not one place available here in New York State, or anywhere in the world, nowhere to go, no place where we could have taken Erick when he began complaining about the voices in his head. Voices that actually told him he had to die by committing suicide. Why he was not wanted here anymore was a mystery to me; he could not stay here any more, and he had to die. They told Erick there was no place to run or hide; they would find him. I could not believe that there was no individual or group who could offer any help to Erick and my family unit. This was so mostly because this kind of behavior was considered a mental illness, bipolar 1 and schizophrenia, or else his particular situation was put down to mental illness or drugs. Although drugs and drug addiction are major players in these mental-illness diagnoses, and Erick was dosed with LSD, his case was not about drugs and drug addiction. I have found that there is great need for places to accommodate, heal, and teach the children, the young adults, and the family members who are having the same experiences, places where mental illness is understood for what it is: a spiritual/scientific experience. I ask the American medical and psychiatric members to get involved, because their physical diagnosis of a spiritual situation has to be reconsidered, reconsidered in such a way as to include all of the brain and brain cells rather than only that part of the brain, the reptile brain, which houses the physical mind and emotions, while they ignore the spiritual/scientific brain part known as the frontal lobes, the primate brain. There is much work to be done to show why my son Erick was not mentally diseased, nor is he dead. To do so we are willing to educate the world on the linguistics of life and death in the creator's universe and the Universal Game of Polarity Soul Integration via Dimensional Ascension, and the physical universe as it is:
A Holographic Universe, a Sacred Holographic Projection, a Virtual Reality Universe made up of vibration, sound, and light.

“The kingdom of heaven is at hand. Cure the sick, raise the dead, cleanse lepers, and drive out demons.” The Resurrected Dead…Are Living Among Us! …Jesus: “I tell you truly, none can be happy, except he do the Law.” The Law…Follow the laws of your Mother…“For no man can serve two masters. For either he serves Beelzebub and his devils or else he serves our Earthly Mother and her angels. Either he serves death or he serves life. I tell you truly, happy are those that do the laws of life and wander not upon the paths of death. For in them the forces of life wax strong and they escape the plagues of death.” …Jesus


L.O.V.E. Limitless Oscillating Vibrating Energy

THE LAW... *Universal Sympathetic Vibration... Sympathetic Association
LOVE is THE UNIVERSAL LAW of SYMPATHETIC VIBRATION
LOVE…is the LAW that binds individuals together
LOVE…is the LAW that binds Molecules together
LOVE…is the LAW that, whenever broken, causes chaos, discord and destruction
LOVE…is the LAW that, when adhered to, brings PEACE, HARMONY and UNDERSTANDING into LIFE and all LIFE’s Activities

Practicing the law

1. Separating from the chaotic conditions of the mass of mankind which refuses to obey natural and cosmic law.
2. Demonstrating a practical social system based on natural and cosmic law.
3. Communicating these ideas to the outside world through teaching, healing and helping others according to their needs.
4. Inviting all communities and other individuals who are sufficiently evolved to be willing to cooperate with the law.




We do not see things as They are
We see things as We are
-Talmud







Thank You For Visiting Our Website