- Explore GCP
We begin the description of the Experiment with a short list of links to particular aspects. This material is also found in later sections of this page, but this is a convenient menu of important items.
The GCP recorded its first data on August 4, 1998. Beginning with a few random sources, the network grew to about 10 instruments by the beginning of 1999, and to 28 by 2000. It has continued to grow, stabilizing at roughly 60 to 65 eggs by 2004.
The early experiment simply asked whether the network was affected when powerful events caused large numbers of people to pay attention to the same thing. This experiment was based on a hypothesis registry specifying a priori for each event a period of time and an analysis method to examine the data for changes in statistical measures. Various other modes of analysis including attempts to find general correlations of GCP statistics with other longitudinal variables have been considered, and continue to be developed.
In the most general sense, the purpose of the project was and is to create and document a consistent database of parallel streams of random numbers generated by high-quality physical sources. The goal is to determine whether statistics computed from these data show any detectable correlations with independent long-term physical or sociological variables. In the original experimental design we asked the more limited question of whether there is a detectable correlation of deviations from randomness with the occurrence of major events in the world.
Periods of collective attention or emotion in widely distributed populations will correlate with deviations from expectation in a global network of physical random number generators.
The formal hypothesis of the original event-based experiment is very broad. It posits that engaging global events will correlate with deviations in the data. We use "operational definitions" to establish unambiguously what is done in the experiment. The identification of events and the times at which they occur are specified case by case, as are the statistical recipes. The approach explicitly preserves some latitude of choice, as is appropriate for an experiment exploring new territory. Accepting loose criteria for event identification allows exploration of a variety of categories, while the specification of a rigorous, simple hypothesis test for each event in the formal series assures valid statistics. These are combined to yield a confidence level for the composite of all formal trials. This "bottom line" constitutes a general test of the broadly defined formal hypothesis, and characterizes a well-understood database for further analysis.
For a more up-to-date discussion of formal analysis, see The GCP Event Experiment by Bancel and Nelson, 2008, and Exploring Global Consciousness by Nelson and Bancel, 2010 (in press, actually -- so the link will be dead for a while).
The formal events are fully specified in a hypothesis registry. Over the years, several different analysis recipes were invoked, though most analyses specify the "network variance" (Squared Stouffer Z). A few specify the "device variance", which is the inter-RNG variance (Sum of Z^2). After the first few months, during which several statistical recipes were tried, the network variance (netvar) became the "standard method" which was adopted for almost all events in the formal series. The event-based experiment thus has explored several potentially useful analyses, but has focused primarily on the netvar.
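As a concrete illustration (a sketch, not the project's own code), the two statistics can be computed from the per-second z-scores of the reporting eggs; the function and variable names here are hypothetical:

```python
import numpy as np

def second_stats(egg_z):
    """egg_z: per-egg z-scores for one second (one value per reporting egg)."""
    n = len(egg_z)
    stouffer_z = egg_z.sum() / np.sqrt(n)   # Stouffer Z across the network
    netvar = stouffer_z ** 2                # "network variance": Squared Stouffer Z
    devvar = np.sum(egg_z ** 2)             # "device variance": inter-RNG Sum of Z^2
    return netvar, devvar
```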
The event statistics are usually calculated at the trial level -- 1 second -- though other blocking is possible. The trial statistics are combined across the total time of the event to yield the formal result. The results table has links to details of the analyses, typically including a "cumulative deviation" graph tracing the history of the second-by-second deviations during the event, leading to the terminal value, which is the test statistic. The following table shows the precise algorithms for the basic statistics used in the analyses.
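As a complement to the table, here is a minimal sketch of how per-second netvar values might be combined over an event period into a single test statistic and a cumulative deviation trace, assuming each per-second value is chi-square distributed with one degree of freedom under the null hypothesis; the names are illustrative, not the project's code:

```python
import numpy as np
from scipy import stats

def event_result(netvar_seconds):
    """netvar_seconds: per-second Squared Stouffer Z values for the event period."""
    T = len(netvar_seconds)
    chisq = netvar_seconds.sum()               # sum of T chi-square(1) values under the null
    p = stats.chi2.sf(chisq, df=T)             # one-tailed p-value for the event
    event_z = stats.norm.isf(p)                # equivalent standard normal score
    cumdev = np.cumsum(netvar_seconds - 1.0)   # second-by-second deviation from expectation
    return event_z, cumdev
```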
It is possible to generate various kinds of controls, including matched analysis with a time offset in the actual database, or matched analysis using a pseudorandom clone database. However, the most general control analysis is achieved by comparisons with the empirical distributions of the test statistics. The event data comprise less than 2% of the whole database, and the non-event data can be used for resampling to produce a distribution of "control" events with the parameters of the formal events, but random start times. These provide a rigorous control background and confirm the analytical results for the formal series of hypothesis tests. See the figure below, created by Peter Bancel using a reduced dataset beginning December 1998 and ending December 2009, which compares the cumulative formal result against a background of 500 resampled controls.
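The resampling idea can be sketched as follows, assuming one has a per-second netvar series for the non-event data and the durations of the formal events; all names are hypothetical:

```python
import numpy as np

def resample_controls(netvar_nonevent, event_lengths, n_controls=500, seed=None):
    """Build control 'events' with the parameters of the formal events but random start times."""
    rng = np.random.default_rng(seed)
    controls = np.empty((n_controls, len(event_lengths)))
    for i in range(n_controls):
        for j, T in enumerate(event_lengths):
            start = rng.integers(0, len(netvar_nonevent) - T)         # random start time
            controls[i, j] = netvar_nonevent[start:start + T].sum()   # same recipe as a formal event
    return controls
```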
Over the 12 years since the inception of the project, over 325 replications of the basic hypothesis test have been accumulated. The composite result is a statistically significant departure from expectation of roughly 6 standard deviations as of late 2010. This strongly supports the formal hypothesis, but more important, it provides a sound basis for deeper analysis using refined methods to re-examine the original findings and extend them using other methods. These potentials are developed in recent papers, including The GCP Event Experiment by Bancel and Nelson, 2008. The full formal dataset is shown in the next figure, where it is compared with a background of simulated pseudo-event sequences generated by drawing random Z-scores from the standard normal distribution. As in the resampling case, it is obvious that the real data are from a different population. Note, however, that it takes a few dozen events to reach a point where the real score accumulation is clearly distinguishable from the simulations.
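The null-hypothesis simulation described above is straightforward to reproduce in outline: draw event-level Z-scores from the standard normal distribution and accumulate them the way the formal series is accumulated. A minimal sketch with illustrative names only:

```python
import numpy as np

def simulate_pseudo_series(n_events, n_sims=500, seed=None):
    """Each row is one simulated cumulative history of event Z-scores drawn from N(0,1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sims, n_events))
    return np.cumsum(z, axis=1)   # running sum of event scores under the null hypothesis
```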
The focus of our effort turns now to a more comprehensive program of rigorous analyses and incisive questions intended to characterize the data more fully and to facilitate the identification of any non-random structure. We begin with thorough documentation of the analytical and methodological background for the main result, to provide a basis for new hypotheses and experiments. The goal is to increase both the depth and breadth of our assessments, to develop models that can help distinguish classes of potential explanations. Essentially, we are looking for good tools that will give us a better understanding of the data deviations.
A variety of analyses have been undertaken to establish the quality of the data and characterize the output of individual devices and the network as a whole. The first stage is a careful search for any data that are problematic because of equipment failure or other mishap. Such data are removed. With all bad data removed, each individual REG or RNG can be characterized to provide empirical estimates for statistical parameters. This also allows a shift of analytical emphasis from the events to trial-level data in order to extract more structural information from the database. The approach is to convert the database into a normalized, completely reliable data resource that facilitates rigorous analysis. The trial-level data allow a richer assessment of the multi-year database using sophisticated statistical and mathematical techniques. We can use a broader range of statistical tools to look for small but reliable changes from expected random distributions that may be correlated with natural or human-generated variables.
Ideally, the trials recorded from the REGs follow the binomial [200, 0.5] distribution, with expected mean 100 and variance 50. However, although they all are high-quality random sources, perfect theoretical performance is not expected for these real-world devices. A logical XOR of the raw bit-stream with a fixed pattern containing exactly equal numbers of 0 and 1 bits compensates for mean biases of the REGs. After XOR'ing, the mean is guaranteed over the long run to fit theoretical expectation. The trial variances remain biased, however. The biases are small (about 1 part in 10,000) and generally stable on long timescales. We treat them as real, albeit tiny, biases that need to be corrected by normalization for rigorous analysis. They are corrected by converting the trialsums for each individual egg to standard normal variables (z-scores), based on the empirical standard deviations.
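A minimal sketch of this normalization step, assuming the theoretical mean of 100 (guaranteed by the XOR) and an empirical standard deviation estimated separately for each egg from clean data; names are illustrative:

```python
import numpy as np

def normalize_trials(trialsums, empirical_std):
    """Convert raw 200-bit trialsums for one egg to z-scores using its empirical SD."""
    return (np.asarray(trialsums) - 100.0) / empirical_std
```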
The normalized and standardized data resource allows us to do a rigorous re-analysis of the experiment. The result is little different from the original analysis, but it provides confidence in the foundation for new analytical investigations. These include the development of orthogonal, independent measures of structure in the event data, and examination of questions of temporal and spatial structure implicit in the general hypothesis. A recent (2008) assessment is detailed in The GCP Event Experiment by Bancel and Nelson, Journal of Scientific Exploration, March 2008.