I've been working to hook up two old Grass machines. I found the plugs, and was looking for the cable when Dave suggested a company called Markertek. Joel at Markertek suggested sending the specs to penny_at_towerpower.com, their custom cable division.
If you already use Hotmail, you should consider reading and sending mail using Outlook Express. The integration is quite tight. You can also import your address list from Hotmail, though, with multiple accounts, it could start getting a bit messy.
Set up Hotmail on Outlook Express
How much gold is in a human body? Recently, Ahnlide et al. (Malmö University Hospital, Sweden), using inductively coupled plasma mass spectrometry, found the concentration of gold in blood to be <0.04-0.15 μg/L for those without dental gold. For those with dental gold, the concentration ranged from 0.04 to 1.07 μg/L. The detection limit was 0.04 μg/L. The average human adult has approximately 5 liters of blood. One might, however, expect gold to be taken up into tissues such as the liver. Indeed, in 1962, Parr and Taylor, demonstrating the determination of gold in biological materials via thermal neutron activation analysis, showed the gold concentration in human liver to range between 13 and 790 μμg/g wet tissue, with a median value of 57 μμg/g and a mean of 114 μμg/g. The CRC Handbook of Toxicology places the mass of the human liver at 2.3 g/100 g body weight. Using a 70 kg human gives us 1610 g of liver, which gives us approximately 0.2 μg of gold (using the mean value). So, a lot more gold hangs around in blood than in the liver. Actually, in the 50s it was quite popular to subscribe to the alchemical properties of gold. In this case, the subjects (22 males and 10 females), ages ranging from a few hours after birth to 80 years, had never been treated with drugs containing gold. The liver samples were taken post-mortem. Those interested in the toxicology of gold should do a search on "gold" at Amazon.com in Patty's Toxicology, Tox Issues Related to Metals/Neurotoxicology and Radiation/Metals and Metal Compounds.
 I. Ahnlide, C. Ahlgren, B. Bjorkner, M. Bruze, T. Lundh, H. Moller, K. Nilner and A. Schutz, "Gold concentration in blood in relation to the number of gold restorations and contact allergy to gold," Acta Odontol.Scand., vol. 60, pp. 301-305, Oct. 2002.
 L. Lee, "Volume of Blood in a Human," in The Physics Factbook, G. Elert, Ed., [Online document], 2001, [cited 18 Dec 2003]. Available: http://hypertextbook.com/facts/1998/LanNaLee.shtml.
 R.M. Parr and D.M. Taylor, "The Determination of Gold in Human Liver by Thermal Neutron Activation Analysis," Phys. Med. Biol., vol. 8, pp. 43-50, 1962. Available: http://ej.iop.org/links/q55/NHcvircgzwOWMFq9k4yEVg/pbv8i1p43.pdf.
 M.J. Derelanko, "Risk Assessment," in CRC Handbook of Toxicology, M.J. Derelanko and M.A. Hollinger, Eds. Boca Raton: CRC Press, 1995, p. 645.
 D.R. Juberg and F.T. Hearne, "Silver and Gold," in Patty's Toxicology, 5th ed., E. Bingham, B. Cohrssen and C.H. Powell, Eds. New York: Wiley, 2001.
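The blood-versus-liver arithmetic above fits in a few lines. Here is a quick check in Python (taking μμg to mean picograms, and using the 5 L blood volume from the Physics Factbook; the variable names are mine):

```python
# Rough totals of gold in blood versus liver, from the figures quoted above.
BLOOD_VOLUME_L = 5.0             # average adult blood volume
BLOOD_CONC_UG_PER_L = 1.07       # upper end with dental gold (Ahnlide et al.)
LIVER_MASS_G = 0.023 * 70e3      # 2.3 g per 100 g body weight, 70 kg adult -> 1610 g
LIVER_CONC_PG_PER_G = 114        # mean from Parr & Taylor, assuming uug/g = pg/g

blood_gold_ug = BLOOD_VOLUME_L * BLOOD_CONC_UG_PER_L
liver_gold_ug = LIVER_MASS_G * LIVER_CONC_PG_PER_G * 1e-6  # pg -> ug

print(f"blood: up to {blood_gold_ug:.2f} ug; liver: ~{liver_gold_ug:.2f} ug")
```

Even taking the highest blood concentration, both totals are tiny fractions of a microgram per liter of tissue, and the blood total comes out well above the liver total.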
What are good single-trial experiments to test out dry sensors? Ideally, the experiments should be generally accepted, i.e., published in a peer-reviewed journal. The idea would be to repeat the experiment of interest, comparing performance for dry versus wet electrodes using the published method of analysis. A single-trial scenario is important because we want to rule out the claim that noise from the sensors is simply being averaged away. These sorts of experiments come in two flavors (sort of): they can be focused on the so-called Brain Computer Interface (BCI), or on classification of some sort. In reality, BCI is a subset of classification experiments, one where the practicality of entering information into a system has been considered. These mostly have to do with recognizing the readiness potential prior to hand or finger movement. An example of a non-BCI single-trial classification exercise would be the single-trial classification of stimuli such as words or sentences. A search on PubMed for "single trial EEG" brought up 277 references, none of which were available online. IEEE Xplore offers full-text articles, so I went there next, searching for the same phrase.
Bereitschaftspotential (BP) is German for "readiness potential". It is a negative potential detectable via EEG occurring several hundred milliseconds before movement. As I mentioned earlier, a group at Fraunhofer-FIRST has been working on the classification of BP in single-trial recordings. Blankertz et al. demonstrated both response-aligned classification (~97%) and a pseudo-online classification using two classifiers. In this discussion, I will focus on their response-aligned classification, which was essentially a two-class problem (left hand versus right).
 B. Blankertz, G. Curio and K. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing," in Advances in Neural Information Processing Systems, vol. 14, T.G. Dietterich, S. Becker and Z. Ghahramani Eds. Cambridge, MA: MIT Press, 2002, pp. 157-164.
I started looking into EEG-measured habituation because of a comment that Pat had made on my thesis. I had quite casually written something about "waning interest and mental fatigue" limiting how long brain recordings can last. Pat wanted a more scientific description, hopefully something about the decay of the brain's response to repeated stimuli. The correct term for this phenomenon is 'habituation'. As expected, there was a substantial body of literature on this topic.
The Economist had an article which is related to my thesis! It concerns the research of Prof. Klaus-Robert Müller at the Fraunhofer Institute for Computer Architecture and Software Technology (FIRST). The article goes over some of the recent successes the group has had with single-trial classification. Some encouragement (and impetus) for me! Incidentally, the classification is based on Bereitschaftspotential (readiness potential), a negative potential in EEG preceding movement. This is different from the type of BCI that we want to do in our lab, which is language based.
Ken introduced me to LIBSVM, a Support Vector Machine library from NTU. Good? We will see.
I want to know about habituation and EEG. Who did the first studies, and what do they show? Basically habituation concerns the decay in EEG amplitude with repeated stimuli. What are the timescales involved? I'm just starting to make some headway. With adults there are many factors involved; for instance, we could start thinking about something else, or "try" to ignore the stimuli... for infants, it is a bit less obvious. Given this, here's an interesting quote from Regan (the ABR is the Auditory Brainstem Response).
Compared with other EPs, the ABR is notably resistant to fatigue or habituation. The delivery of over 100,000 stimuli in 28 consecutive averages to a newborn infant produced no evident change in the response and in another study the ABR was not altered by 20 min of continuous stimulation.

Unfortunately, Regan gives references only for the two experiments, but nothing on habituation of the other EPs. Natus is a company that went big time doing ABRs on infants. Go Bill New!
 D. Regan, Human brain electrophysiology : evoked potentials and evoked magnetic fields in science and medicine, New York: Elsevier, 1989, p.266.
I'm not sure what happened, but my Adobe Acrobat Reader 6.0 has completely stopped working. I've tried uninstalling and reinstalling, and every time it just crashes, hangs, freezes up on startup. The last thing I remember doing is trying to print a PDF from a scientific journal. What gives? It's driving me mad. For now, I've stopped using it, though I still, from time to time, accidentally click on a PDF link, and then BAM, the computer seizes up and I have to invoke the Task Manager to kill the offending AcroRd32.exe, which continuously consumes something like 40-90% of the CPU. I am not the first to have these problems. A run through the reviews on download.com and ZDNet will show you it isn't just me. It's not, really... arrrgghhhh!!!! The most annoying thing is that I really need it to read scientific journals...
Could this be a clue? Go Google! I checked C:\Documents and Settings\torque\Local Settings\Temp, sure enough, Acrxxxx.tmp galore.... ending with AcrFFFF.tmp, each exactly 0 bytes.
In fact, it worked! So, there's the solution. Thank you mary_at_mindspring.com.

C:\Documents and Settings\torque\Local Settings\Temp>erase acr*.tmp
A lot of you wrote to thank me for this post. You are welcome - it was driving me pretty crazy. :) If you want to help the cause, put a link to this post or the blog. Spread the word and stop the madness. Cheers, -T
In case you were wondering... as we were... if you haven't seen the movie, don't click here.
Just keep swimming, swimming, what do we do? We swim... swim.. ... but I actually, I'm Crush. You try!
The standard reference on truncated SVD is Hansen's "The truncated SVD as a method for regularization". I've been using it on the math association experiment with some success, though nothing as good as Ken's Tikhonov regularization. Now, where can I get a copy of BIT? Ahh, here's the man: Per Christian Hansen.
 P.C. Hansen, "The truncated SVD as a method for regularization," BIT, vol. 27, pp. 534-553, 1987.
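The idea in truncated SVD is easy to sketch: form the SVD and keep only the k largest singular values when inverting, discarding the small ones that amplify noise. A minimal NumPy illustration (my own toy example with a Hilbert matrix, not Hansen's code or the math association data):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x = b, regularized by truncating the SVD to the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the k dominant components; tiny singular values amplify noise.
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ b))

# Ill-conditioned toy problem: an 8x8 Hilbert matrix.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.solve(A, b)   # noise is amplified by the small singular values
x_tsvd = tsvd_solve(A, b, k=6)    # truncation damps the amplification
```

The trade-off is the choice of k: too small and you throw away signal, too large and the noise comes back — the same bias-variance dial that the Tikhonov parameter turns continuously.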
Interviewing for a quantitative finance firm... better brush up on your statistics.
I left off yesterday wondering about sources of EEG - actually, I read a few papers. I'll try to summarize what I found. Essentially, the accepted description is that the EEG is a result of extracellular currents, i.e., the ions which flow following neuronal discharge.
To be precise, let me quote from Ebersole and Pedley's "Current Practice of Clinical Electroencephalography". In the first chapter, Buzsáki, Traub and Pedley write:
Membrane currents generated by neurons pass through the extracellular space. These currents can be measured by electrodes placed outside the neurons. The field potential (i.e., local mean field), recorded at any given site, reflects the linear sum of numerous overlapping fields generated by current sources (current from the intracellular space to the extracellular space) and sinks (current from the extracellular space to the intracellular space) distributed along multiple cells. This macroscopic state variable can be recorded with electrodes as a field potential or EEG or with magnetosensors (superconducting quantum interference devices [SQUIDs]) as a MEG. These local field patterns therefore provide experimental access to the spatiotemporal activity of afferent, associational, and local operations in a given neural structure. To date, field potential measurements provide the best experimental and clinical tool for assessing cooperative neuronal activity at high temporal resolution. However, without a mechanistic description of the underlying neuronal processes, scalp or depth EEG is simply a gross correlate of brain activity rather than a predictive descriptor of the specific functional and anatomical events. The essential experimental tools for the exploration of EEG generation have yet to be developed. [1, p. 1]
A straightforward approach to decompose the surface (scalp) recorded event is to study electrical activity simultaneously on the surface and at the sites of the extracellular current generation. Electrical recording from deep brain structures by means of wire electrodes is one of the oldest recording methods in neuroscience. Local field potential measurements, or "micro-EEG", combined with recording of neuronal discharges is the best experimental tool available for studying the influence of cytoarchitectural properties, such as cortical lamination, distribution, size, and network connectivity of neural elements on electrogenesis. However, a large number of observation points combined with decreased distance between the recording sites are required for high spatial resolution and for enabling interpretation of the underlying cellular events. Progress in this field should be accelerated by the availability of micromachined silicon-based probes with numerous recording sites. Information obtained from the depths of the brain will then help clinicians interpret the surface-recorded events.

Buzsáki et al. go on to describe various sources of extracellular current flow. What seems to be missing is some discussion of how the fields then affect all these other things. They list the following sources:
In principle, every event associated with membrane potential changes of individual cells (neurons and glia) should contribute to the perpetual voltage variability of the extracellular space. Until recently, synaptic activity was viewed as the exclusive source of extracellular current flow or EEG potential. Progress during the 1990s revealed numerous sources of relatively slow membrane potential fluctuations, not directly associated with synaptic activity. Such non-synaptic events may also contribute significantly to the generation of local field potentials. These events include calcium spikes, voltage-dependent oscillations, and spike afterpotentials observed in various neurons. 
 G. Buzsáki, R.D. Traub and T.A. Pedley, "The Cellular Basis of EEG Activity," in Current Practice of Clinical Electroencephalography, 3rd ed., J.S. Ebersole and T.A. Pedley Eds. Philadelphia: Lippincott Williams & Wilkins, 2003, pp. 1-11.
A book that I wanted for my thesis is out of print and expensive ($68-150) on Alibris. It made me wonder if it was out of copyright yet. UPenn has an excellent site on how to check whether a book can go online. It has instructions for searching the Library of Congress database for copyright renewals. Cool! My book, unfortunately, had its copyright renewed in 1989.
RE-447-729 (COHM) ITEM 1 OF 1 IN SET 1 TITL: A History of the electrical activity of the brain; the first half-century. By Mary A. B. Brazier. CLNA: acMary A. B. Brazier (A) DREG: 20Nov89 ODAT: 17Oct61; OREG: AI-7489. OCLS: A LINM: NM: all new except quotations and ill. cited.
In non-invasive EEG, we measure tiny fluctuations in electrical potential at the scalp. What causes these fluctuations?
I found some information on the strength of MEG in The Encyclopaedia of Medical Imaging Vol 1: Physics, Techniques and Procedures by Gustav K. von Schulthess. I'll need to get the reference when it shows up on the Library of Congress database. Here is the useful quote:
Neuromagnetic fields have amplitudes in the order of a few picotesla (10^-12 tesla) and very sensitive instruments are needed to detect these extremely weak fields.
Ahh, here's a better one, from Jasper Daube's "Clinical Neurophysiology".
Magnetoencephalography (MEG) is the recording of the small magnetic fields produced by the electric activity of neurons in the brain. These magnetic fields are generated by current flowing in neurons, with a small contribution from extracellular current flow in the volume conducting medium around the brain (generally less than the contribution of intracellular currents). These magnetic fields are extremely small, typically in the femtotesla or picotesla range (10^-15 to 10^-12 T). They must be detected by a magnetic gradiometer connected to a special type of extremely sensitive amplifier called a superconducting quantum interference device (SQUID), which must be cooled by liquid helium. To eliminate noise signals caused by the much larger magnetic fields associated with electrical equipment, power lines, and the earth's magnetic field, a special magnetically shielded room is required. For all these reasons, MEG is a very expensive tool. Another disadvantage of MEG, compared with EEG, is that it cannot be used readily for the long-term recordings needed to capture and to localize an epileptic seizure, because the subject's head must be kept immobilized near the magnetic gradiometer array during the entire recording...

Hmm, the comment about extracellular flow versus current flowing in the neurons is interesting. I thought the typical claim is that EEG is mostly due to the extracellular current flow.
...Because magnetic fields created by a current source are always oriented along a tangent to a circle around the line of current flow, MEG is insensitive to radially oriented currents in cerebral cortex and is sensitive only to tangential currents, in contrast to EEG, which is sensitive to both (although more sensitive to radial than to tangential currents). Thus, in practice, MEG recordings are often combined with simultaneous conventional EEG recordings.
 J.R. Daube, Clinical neurophysiology, Oxford ; New York: Oxford University Press, 2002, p. 144.
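The shielding problem is easy to appreciate with a single division (the ~50 μT geomagnetic figure is my own ballpark, not from Daube or von Schulthess):

```python
# Order-of-magnitude comparison: ambient geomagnetic field vs. a neuromagnetic signal.
earth_field_T = 50e-6    # ~50 microtesla, typical Earth's static field (assumed value)
meg_signal_T = 100e-15   # 100 femtotesla, mid-range cortical MEG signal per Daube

ratio = earth_field_T / meg_signal_T
print(f"Earth's static field is roughly {ratio:.0e} times the MEG signal")
```

Around eight to nine orders of magnitude of interference rejection, hence the shielded room and gradiometer arrangement.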
As part of my thesis, I have a short excerpt on EEG instrumentation history. I cited, for example, Hans Berger's original paper for his own work. However, Pat had a really good point: what I should cite is not Berger's paper but a paper or book which says that Hans Berger first reported human EEG in 1929. This has led me on a search for a definitive historical account of what happened. It appears to be Mary Brazier's "A history of the electrical activity of the brain; the first half-century".
Fortunately, the book can be found in Lane Medical Library (lookup "Brazier history"). Unfortunately, it is checked out till the end of the year.
Mary Brazier (1961) has described the work of the German psychiatrist Hans Berger as the triumph of a man working with equipment that was inadequate even by the standards of his day. Like Caton, Berger attempted to record electrical responses to sensory stimuli in animals, although it seems that the work he did between 1902 and 1910 was in general unsuccessful. In 1924 he turned to the measurement of human electrical potentials but delayed publication of his results until 1929, when the first recorded electroencephalogram (of his young son) appeared in Archiv für Psychiatrie.

Another useful source of information is the section entitled "Electrophysiological Recordings" in Stanley Finger's "Origins of Neuroscience: A History of Explorations into Brain Function". Some useful excerpts appear below:
Berger discovered alpha rhythm, running at 10 cycles per second (hertz or Hz). He found that this disappeared if the eyes were opened, with mental effort such as doing mental arithmetic (with the eyes closed) and with loud noises or painful stimuli. Berger's work was disregarded by physiologists partly because it was published in psychiatric journals, and perhaps because of his reputation for eccentricity, seclusiveness and his outstanding belief in psychic phenomena such as telepathy. Only after his work was replicated by Adrian and Matthews in Cambridge did he get the credit he deserved for laying the foundations of human electroencephalography. Brazier's (1961) history of the EEG cannot be recommended too highly for readers interested in a full and authoritative account of the early days.
Richard Caton (1842-1926) was probably the first to record the spontaneous electrical activity of the brain.

I found some more useful comments on instrumentation in Hobson, e.g.:
Caton's first report was a presentation before the British Medical Association in Edinburgh in July 1875 and was summarized in the British Medical Journal later that year. Caton told his audience that the electrical changes taking place in the brain varied in location with the specific peripheral stimuli he was using:

In every brain hitherto examined, the galvanometer has indicated the existence of electric currents. The external surface of the grey matter is usually positive in relation to the surface of a section through it. Feeble currents of varying direction pass through the multiplier when the electrodes are placed on two points of the external surface of the skull. The electric currents of the grey matter appear to have a relation to its functions. When any part of the grey matter is in a state of functional activity, its electric current usually exhibits negative variation. For example, on the areas shown by Dr. Ferrier to be related to rotation of the head and to mastication, negative variation of the current was observed to occur whenever those two acts respectively were performed. Impressions through the senses were found to influence the currents of certain areas, e.g., the currents of that part of the rabbit's brain which Dr. Ferrier has shown to be related to movements of the eyelids, were found to be markedly influenced by stimulation of the opposite retina by light. (1875, 278)

Electrophysiological recordings became much more fashionable after Hans Berger (1873-1941) published his electroencephalograph (EEG) work on humans in 1929. Unlike almost everyone else, Berger cited Caton's valuable contribution to the field. In 1929, he wrote:

Caton has already (1874) published experiments on the brains of dogs and apes in which bare unipolar electrodes were placed either on the cerebral cortex and the other on the surface of the skull. The currents were measured by a sensitive galvanometer.
There were found distinct variations in current, which increased during sleep and with the onset of death strengthened, and after death became weaker and then completely disappeared. Caton could show that strong current variations resulted in brain from light shone into the eyes, and he speaks already of the conjecture that under the circumstances these cortical currents could be applied to localization within the cortex of the brain. (Translated by Cohen, 1959, 258) 
Hans Berger used Einthoven's string galvanometer to record the electrical activity of the brain in human subjects.

and
The rest of the world began to fall in line after 1933 when two English physiologists, Edgar Adrian and Bryan Matthews, were able to confirm Berger's observation by recording their own brainwaves using their cathode-ray oscilloscope. This instrument substituted an electron beam for the lightweight string in Einthoven's galvanometer. Adrian and Matthews took advantage of the virtually weightless state of electrons. When a beam of electrons was moved back and forth across a phosphorescent screen, the external voltages were visualized as deviations of the beam's path. These distinguished English investigators gained for Berger's discovery the scientific support it needed.

Ok, that last reference was a bit casual... but the comments about the cathode-ray oscilloscope are useful. I suspect that all this comes from Brazier's text, but confirming that will have to wait. Searching for more information on the string galvanometer, I discovered Robert Bud's "Instruments of Science: An Historical Encyclopedia". It gives a short but insightful treatment of EEG instrumentation.
The EEG evolved from research among physiologists in the 1800s on the electrical properties of animals. In 1875, Richard Caton of Liverpool, England, published reports on his detection of electrical activity in animal brains. Fifteen years later, a Polish physiologist, Adolf Beck, detected regular electrical patterns in the cerebral cortexes of dogs and rabbits.

The successful introduction of Einthoven's electrocardiograph (EKG) in 1902 for disease diagnosis inspired further research on the brain. Investigators working on the brain quickly adopted the highly sensitive Einthoven string galvanometer. In 1914 Napoleon Cybulski and S. Jelenska-Macieszyna at the University of Cracow published their tracings taken with an Einthoven galvanometer from a dog during an epileptic seizure. The development of the triode amplifier for small voltages in radio signaling during World War I made it even easier to record the very small electrical signals in the brain.

The EEG has changed little since its first human applications in the 1920s, but its function has reversed completely, from providing a characteristic tracing of life to demonstrating its absence.

Note that the last statement is not quite right, since EEG is used in a lot of other fields. It was the authors' conclusion to a passage on the use of EEG to decide on clinical death. The important quote I wanted was that the EEG has changed little.
 M.A.B. Brazier, A history of the electrical activity of the brain; the first half-century, New York: Macmillan, 1961.
 J. Empson and M.B. Wang, Sleep and dreaming, Houndmills, Basingstoke, Hampshire ; New York, N.Y.: Palgrave, 2002.
 S. Finger, Origins of neuroscience : a history of explorations into brain function, New York: Oxford University Press, 1994, pp. 41-42.
 J.A. Hobson, The dreaming brain, New York: Basic Books, 1988, pp. 115-116.
 R. Bud and D.J. Warner, Instruments of science : an historical encyclopedia, New York: Science Museum, London, and National Museum of American History, Smithsonian Institution, in association with Garland Pub., 1998, pp. 207-208.
What happened next was the most valuable trading lesson for me, ever. Not being a participant, I found myself observing the markets, e.g., the Nasdaq Composite (^IXIC), very objectively. I saw stocks, sectors, indices all set up. Further, I saw market timing signals beginning to kick in. I reached a point where I simply couldn't stand it anymore. I stepped in and was immediately rewarded with nice profits. I got too busy to actively manage the positions, so I let trailing stops take me out. To my surprise, I ended up staying with the positions longer and was able to ride out larger moves, because I wasn't trying to micromanage the trades.
What's this, a Microsoft blogging tool? How do I enter equations?
Most of my code for doing brain-wave recognition has been written in C++. I spent a lot of my time making beautiful objects incorporating FFTW, valarray and LAPACK (via the Sun Performance Library). Though I wrote with code reuse in mind, things are just getting too complicated. Over the last few months, I've read a number of articles talking about how Matlab now uses FFTW and LAPACK... perhaps it is time to switch back? Before I do, I want to check on timing... shouldn't compiled code be faster? Incidentally, time consumption in my code is basically in the linear algebra, so if Matlab uses the same libraries... After a bit of searching, I found a very useful article entitled "Timing Comparisons of Mathematica, MATLAB, R, S-Plus, C & Fortran".
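A quick way to get my own numbers is a micro-benchmark over exactly the operations my code spends time in. Here is the sort of thing I have in mind, sketched in Python/NumPy purely for illustration (NumPy also sits on top of optimized LAPACK/FFT routines); the sizes and the choice of operations are arbitrary:

```python
import time
import numpy as np

def timeit_once(fn):
    """Wall-clock one call to fn."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def bench(label, fn, repeats=5):
    """Report the best of several runs to reduce scheduling noise."""
    best = min(timeit_once(fn) for _ in range(repeats))
    print(f"{label}: {best * 1e3:.1f} ms")
    return best

n = 512
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

t_mm = bench("matrix multiply", lambda: a @ b)
t_fft = bench("2-D FFT", lambda: np.fft.fft2(a))
t_svd = bench("SVD", lambda: np.linalg.svd(a, full_matrices=False))
```

If the heavy lifting really is all in the library calls, the interpreted wrapper should add little overhead, which is the whole argument for switching back.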
My Perl is pretty rusty. I found an excellent tutorial at NCSA.
I tried Newark InOne's highly advertised online help, and this is what I got:
Below is a transcript of your session.
Thank you for contacting our live service center.
You: Can you help me
Automated Agent: Thank you for contacting Newark InOne website support. All of our live account representatives are helping others at this time. Please email us at firstname.lastname@example.org. If you need immediate support please contact 1-800-NEWARK-T (1-800-639-2758) and select option 1.
Ready to start a live conversation.
Your representative is Automated Agent.
Please wait for the next available representative.
Welcome to our Live Customer center!
For the record, I've been on the phone waiting for the "next customer service representative" for about 20+ minutes now...
If you cannot read the equations, or if they look like junk, you may need to install MathPlayer. Look for the icon on this page.
Hyvärinen gives the log-likelihood $L$ in a noise-free Independent Component Analysis (ICA) model as

$$L = \sum_{t=1}^{T} \sum_{i=1}^{n} \log f_i(\mathbf{w}_i^T \mathbf{x}(t)) + T \log |\det \mathbf{W}|$$
Where does this come from? The $i$ indexes the output components, i.e., each row $\mathbf{w}_i^T$ of $\mathbf{W}$. The $t$ indexes time. Purcell's Maximum Likelihood Estimation Primer was especially helpful here. The $t$ are analogous to each coin toss in the coin toss example. Essentially, the question asked by maximum likelihood is: given a set of observed data, what is the probability distribution most likely to have produced the results? In this case, the probability density for each component is represented by $f_i$. In this way we can estimate, using data on hand, the probability distribution of each independent component. Now, how does the determinant come into play here?
Forgetting about ICA for the moment, consider a set of $n$ uncorrelated random variables $s_i$, each with a "known" distribution $f_i$. We then sample the set of random variables $T$ times. It's clear that the probability of having a certain result will be given by something like

$$\prod_{t=1}^{T} \prod_{i=1}^{n} f_i(s_i(t))$$
Hmm, that (or rather its logarithm) looks almost like what we want, but there is still no way to obtain the determinant. Consider a similar argument for the mixed signals, so that

$$\prod_{t=1}^{T} p_x(\mathbf{x}(t))$$

where $p_x$ is the probability density for the observed mixture $\mathbf{x}$. Ahh, here we are. Let's look at the entire set of inputs:

$$L = \sum_{t=1}^{T} \log p_x(\mathbf{x}(t))$$
From ICA, I know that

$$\mathbf{s}(t) = \mathbf{W}\mathbf{x}(t)$$
To finish this off, we look to our buddy Hyvärinen (who incidentally looks quite young), who says in his tutorial that, in general, for any random vector $\mathbf{x}$ with density $p_x$ and for any invertible matrix $\mathbf{W}$, the density of $\mathbf{y} = \mathbf{W}\mathbf{x}$ is given by

$$p_y(\mathbf{y}) = \frac{p_x(\mathbf{W}^{-1}\mathbf{y})}{|\det \mathbf{W}|}$$
Oh, wait, I'm confused again... I can see how

$$p_s(\mathbf{s}) = \frac{p_x(\mathbf{W}^{-1}\mathbf{s})}{|\det \mathbf{W}|}$$

though this doesn't match $L$. I can also see how

$$\sum_{t=1}^{T} \log p_s(\mathbf{s}(t)) = \sum_{t=1}^{T} \log p_x(\mathbf{x}(t)) - T \log |\det \mathbf{W}|$$

which looks closer, but it uses the wrong probability density. Could it be somehow that the summation allows us to use the "known" density? The question is whether the following statement is true:

$$\sum_{t=1}^{T} \log p_s(\mathbf{s}(t)) \stackrel{?}{=} \sum_{t=1}^{T} \sum_{i=1}^{n} \log f_i(s_i(t))$$

I don't off-hand see how it can be...
I figured it out with some help from Ken. It's really quite straight-forward. The hint was not quite right. For $\mathbf{s} = \mathbf{W}\mathbf{x}$, the correct relation is

$$p_x(\mathbf{x}) = |\det \mathbf{W}| \, p_s(\mathbf{W}\mathbf{x}) = |\det \mathbf{W}| \prod_{i=1}^{n} f_i(\mathbf{w}_i^T \mathbf{x})$$

where the last step uses the model assumption that the components are independent with densities $f_i$, making it quite clear that

$$L = \sum_{t=1}^{T} \log p_x(\mathbf{x}(t)) = \sum_{t=1}^{T} \sum_{i=1}^{n} \log f_i(\mathbf{w}_i^T \mathbf{x}(t)) + T \log |\det \mathbf{W}|$$
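The density-transformation rule can also be sanity-checked numerically in a case where both sides are known in closed form: if $\mathbf{s} \sim N(0, I)$ and $\mathbf{x} = \mathbf{W}^{-1}\mathbf{s}$, then $\mathbf{x}$ is Gaussian with covariance $\mathbf{W}^{-1}\mathbf{W}^{-T}$, and its density should equal $|\det \mathbf{W}|\, p_s(\mathbf{W}\mathbf{x})$. A quick NumPy check (the Gaussian choice is mine, just to make both sides computable):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
W = rng.standard_normal((n, n))   # an arbitrary (almost surely invertible) unmixing matrix
x = rng.standard_normal(n)        # an arbitrary evaluation point

def gauss_pdf(z, cov):
    """Zero-mean multivariate normal density."""
    k = len(z)
    norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
    return np.exp(-0.5 * z @ np.linalg.solve(cov, z)) / norm

# Left side: density of x = W^{-1} s with s ~ N(0, I), i.e. x ~ N(0, W^{-1} W^{-T}).
W_inv = np.linalg.inv(W)
lhs = gauss_pdf(x, W_inv @ W_inv.T)

# Right side: |det W| * p_s(W x), with p_s the standard normal density.
rhs = abs(np.linalg.det(W)) * gauss_pdf(W @ x, np.eye(n))

assert np.isclose(lhs, rhs)
```

The determinant factor is exactly what keeps the transformed density normalized, which is where the $T \log |\det \mathbf{W}|$ term in $L$ comes from.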