ACCOMPLISHMENTS: Predicant Biosciences (4/'02 - 10/'05)
- Second person hired into Predicant as VP Engineering
- Recruited team to build instrumentation, control electronics, and associated software
- Rapidly came up to speed on mass spectrometry instrumentation physics and design, including the Hadamard Transform technology licensed from Stanford. Developed simulation software to visualize instrument behavior, and signal processing software to post-process simulation and empirical data.
- The team built:
1) Multiple generations of world-class Electrospray Orthogonal Time-of-Flight Mass Spectrometers (ES-OTOF-MS). Five instruments were constructed and used internally for cancer biomarker discovery
2) Capillary Electrophoresis system control electronics to drive the microfluidic capillary electrophoresis chip developed by Predicant
- Explored the application of Hadamard Transform techniques to improve instrument sensitivity. Developed new IP as part of that exploration.
DETAILS
The thesis behind Predicant Biosciences (originally called Biospect) was that technology had advanced to the point where small quantities of blood could be analyzed to uncover patterns of proteins that would indicate the presence of cancer. Some proteins might be present that weren’t normally there, others might go missing, others might be over-expressed or under-expressed, or a protein might be modified in some way, such as by methylation. Proteins were chosen as the detection target as opposed to DNA because the vast variety of proteins, and their potential modifications produced in normal cellular metabolism, provided a much richer search space. This signature, composed of a relatively small number of proteins (say, up to a dozen), was called a biomarker, and an ideal cancer to target for biomarker-based detection was one that could not be readily biopsied – which ruled out breast cancer as a good market choice, for example.
A tricky aspect of a protein-based approach is that unlike DNA, proteins are fragile and (relatively speaking) large, and require special handling and detection methods to be found at minute (sub-nanomolar) concentrations.
Predicant was an unusual startup in part because it was founded by VCs and a Stanford Professor (Prof. Dick Zare), as opposed to a founding team of entrepreneurs bringing a plan to the VCs. The idea was to license technology from Stanford that was felt to provide a detection sensitivity advantage, then hire 3 VPs to build the team and a first prototype, and then after a year or so bring in a CEO (Deborah Neff) with relevant industry and domain experience (Deb was to join the company in early August 2003). The VCs involved – Prospect Venture Partners (Jim Tananbaum), Venrock (Bryan Roberts), Versant (Camile Samuels Pearson), and Advent (Shahzad Malik) – brought in 3 VPs: John Stults, a Genentech Fellow, as VP of “everything wet”; Jonathan Heller as VP of Bioinformatics; and me as VP of Engineering (basically everything not wet). I was told later (I can’t remember by whom) that I was hired into Predicant in part because of a recommendation by Bill Unger of Mayfield that I was “someone who could invent”. A few years earlier I had pitched Bill on a home networking technology startup, called Travertine Systems, that I had incubated as an Entrepreneur-in-Residence at Benchmark Capital.
My father was an MD/PhD oncologist and cancer researcher, and I have a number of siblings who are doctors – my brother was a radiation oncologist. Perhaps because of that previous exposure I was able to fairly quickly absorb and understand what was necessary from a medical and chemistry standpoint to do my job at Predicant. It was both gratifying and very exciting to work on a technology and product that could potentially make a substantial improvement in human health.
One of the best known methods for the detection of intact proteins is electrospray time-of-flight mass spectrometry. Conceptually, TOF-MS is a straightforward approach – when one places an electric charge on a particle and subjects it to an electrostatic field, the particle will be accelerated. The velocity it reaches depends on its mass (strictly, on its mass-to-charge ratio). Therefore, all one need do to determine both the presence of the particle and its mass is to measure the time it takes for the charged particle to travel some known distance to a detector after it has been accelerated by a known voltage. In practice, however, it is fiendishly difficult to build reliable TOF-MS instrumentation that can repeatably accomplish this task. For one thing, the moving charged particles cannot be allowed to collide with other gas particles, so the process must take place in a high vacuum. The instrument also requires many high voltage elements that must be very precisely fabricated and controlled.
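To make the physics concrete, here is a back-of-the-envelope sketch (purely illustrative – not Predicant code) that computes the flight time of a hypothetical 1000 Da, singly charged peptide ion accelerated through 5 kV and flown down a 1 m field-free tube. An ion of charge ze falling through potential V acquires kinetic energy zeV = ½mv², so t = L·sqrt(m/(2zeV)):

```c
/* Illustrative TOF calculation: zeV = (1/2)mv^2  =>  t = L*sqrt(m/(2zeV)).
 * All numbers are hypothetical, chosen only to show the scale involved. */
#include <math.h>
#include <stdio.h>

#define DALTON   1.66054e-27   /* kg per unified atomic mass unit */
#define E_CHARGE 1.60218e-19   /* elementary charge, coulombs */

static double flight_time(double mass_da, int z, double volts, double len_m)
{
    return len_m * sqrt(mass_da * DALTON / (2.0 * z * E_CHARGE * volts));
}

int main(void)
{
    /* 1000 Da peptide, charge +1, 5 kV acceleration, 1 m flight path */
    double t = flight_time(1000.0, 1, 5000.0, 1.0);
    printf("flight time = %.1f us\n", t * 1e6);   /* ~32 us */
    /* Inverting the relation, m/z = 2*e*V*(t/L)^2, is how a measured
       arrival time is converted back into a mass spectrum. */
    return 0;
}
```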
So just how does one put fragile (“labile”) large intact protein molecules into the gas phase, with charge on them, so they can be electrostatically shepherded through a TOF-MS instrument? This is where electrospray comes in. The proteins are initially in a fluid that enters an electrospray tip at atmospheric pressure just in front of an opening in the TOF-MS instrument, where there is a strong electric field between the tip and the TOF-MS entry orifice (photos of both the electrospray tip on the separations chip and the MS electrospray inlet area are shown below). As the electrospray tip narrows to a point, the field strength increases dramatically and the fluid is launched into the air as a fine spray of charged droplets (containing the proteins). Since the TOF-MS is operating at a high vacuum, air is being drawn into the instrument through the small entry orifice near the electrospray tip. The key to this process is that the droplets fully evaporate before entering the instrument (the entry area is heated to facilitate drying) – leaving the charge on the intact proteins that were contained in the droplets! A diagram of Predicant’s orthogonal TOF-MS instrument is shown directly below. The diagram shows the ions entering the instrument on the left and being guided through successive chambers of lower pressure via quadrupoles and octopoles (driven at RF frequencies) until they reach the extraction chamber, at which point the ions are pushed orthogonally by a pulsed electric field, travel down to an electrostatic reflector, and then ions of the same mass are reflected back up and refocused in both time and space onto the plane of the detector.
Predicant Electrospray Orthogonal Time-of-Flight Mass Spectrometer Schematic
In Predicant’s case, the electrospray tip is integrated into a plastic capillary electrophoretic separations chip to “pre-separate” the sample before it is introduced into the MS, both to improve overall sensitivity and to prevent overwhelming the instrument with high abundance proteins. The chip was fabricated by Predicant, a process that included coating the channels with special coatings to give them the desired electrical, hydrophobic, and/or hydrophilic properties. The chip required fairly high voltage, but more importantly VERY PRECISE current control. The electronics to control the chip were also developed and built by my group.
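As a flavor of what “very precise current control” means in practice, here is a minimal sketch of a digital PI (proportional-integral) control loop that trims the high-voltage setpoint to hold a constant channel current. This is my own hypothetical illustration – the names, gains, and structure are invented, not Predicant’s actual electronics design:

```c
/* Hypothetical sketch of a PI loop regulating CE channel current by
 * adjusting the high-voltage supply setpoint.  Would be called at a
 * fixed rate with the latest ADC current measurement. */
typedef struct {
    double kp, ki;        /* proportional and integral gains */
    double integral;      /* accumulated current error       */
    double v_min, v_max;  /* allowed HV supply range, volts  */
} pi_ctrl;

static double pi_step(pi_ctrl *c, double i_set, double i_meas,
                      double v_now, double dt)
{
    double err = i_set - i_meas;              /* current error, amps   */
    c->integral += err * dt;
    double v = v_now + c->kp * err + c->ki * c->integral;
    if (v < c->v_min) v = c->v_min;           /* clamp to supply range */
    if (v > c->v_max) v = c->v_max;
    return v;                                 /* next HV setpoint      */
}
```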
The first photo below is of one of the separation chips fabricated by Predicant, where the fine electrospray tip can just be seen in the center of the left side of the chip. The photo on the right is of an inlet into the mass spectrometer, to give one a sense of where and how the spray is drawn into the instrument.
Capillary Electrophoresis Separations Chip
with Tiny Electrospray Tip (just visible on left)
Mass Spectrometer Electrospray Inlet
My team designed and constructed all aspects of the OTOF-MS instrument and assembled 5 of them for internal use, to analyze enough samples to find biomarkers. Mechanical design was performed using AutoCAD, and one of the physical instruments is shown in the photo below. The Predicant ES-OTOF-MS was at the time one of the most sensitive, if not THE most sensitive, ES-OTOF-MS instruments available in the world – and I remain very proud of what the team achieved on a tight budget, a tight schedule, and with no pre-existing internal infrastructure. The machine was 20x more sensitive than other commercially available systems, and had better than 30 zeptomole sensitivity. A slide showing the performance of an infusion of Angiotensin into the instrument is shown in the slider at the bottom of this page. The instrument was, after some work, quite stable – demonstrating intensity/resolution CVs of less than 3% in 6-hour long infusions of a 1 uM solution of a peptide mixture. As anyone who has built new MS instruments will tell you, achieving stable behavior over long runs is typically difficult because usually there is some exposed insulating element somewhere that will charge up over time. In our case some of the challenge also came from contaminants emitted by the capillary electrophoresis chip (polymer and coating material, etc.).
Predicant ES-OTOF-MS Instrument
In a departure from standard approaches at that time, I made the decision that the instrument control was to be implemented using LabVIEW and National Instruments controller boards that plugged into a standard PC. This decision worked out very well and the team made rapid progress in bringing up the instruments. The image below is a screenshot of the instrument control screen, which highlights the many individually controlled voltages that are applied to elements throughout the instrument.
Predicant LabVIEW Based Instrument Control Page
Building instruments like the BOTOF is as much an art as it is a science, and at the time there was a relatively small community of mostly Russian-trained (at institutions such as the Moscow Engineering Physics Institute) individuals who led the design of mass spectrometry instruments world-wide. It rapidly became clear that I needed to find one of those guys, and fortunately I was able to locate Mikhail Belov at the Pacific Northwest National Laboratory in Richland, Washington. Mikhail joined Predicant and became the chief designer of Predicant’s ES-OTOF-MS instruments (Mikhail was trained at the above-mentioned institute). It was Mikhail’s experience, persistence, and intuition that got us through many a tight spot. Chuck Fancher also joined us from Stanford Research Systems, and the two PhDs handled instrument design.
As was mentioned above, one of the rationales for starting Predicant was to leverage IP licensed from the Stanford OTL (Office of Technology Licensing) to increase instrument sensitivity. This IP was called Hadamard Transform Time-of-Flight Mass Spectrometry. In conventional TOF-MS instruments, after a packet of ions has been pushed out of the extraction region and is flying towards the detector, a new packet cannot be pushed out until the slowest (heaviest) ion in the previous packet has reached the detector – otherwise the fastest (lightest) ions in the following packet could overtake it. Since the extraction region fill time is much less than the flight time through the flight tube, lots of precious protein ions are wasted. It would be nice if there were a way that ion packets could be pushed into the flight tube much more frequently, despite the fact that ions from different packets will intermingle spatially as they travel through the tube. Of course it turns out there is a way, and the approach is reminiscent of modulation techniques used in spread spectrum communications. Packets can be pushed out with a special time sequence – called a Hadamard sequence – that is pseudo-random and has special properties. The slow (heavy) ions from packet N will be overtaken by fast (light) ions from packet N+1, etc. – but they can all be disambiguated and the spectrum recovered mathematically using an inverse Hadamard transform.
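A toy numerical example may help make the disambiguation step concrete. In the common formulation, the gating pattern corresponds to rows of a simplex (S-) matrix built from cyclic shifts of a pseudo-random maximal-length sequence; the detector records overlapped sums y = S·x, and the spectrum x is recovered with the closed-form inverse S⁻¹ = (2/(n+1))(2Sᵀ − J). The sketch below is my own illustration (not Predicant code, and far shorter than any real sequence), demonstrating exact recovery at order 7:

```c
/* Toy illustration of Hadamard-transform multiplexing at order 7.
 * Rows of the S-matrix are cyclic shifts of a 0/1 maximal-length
 * sequence; the detector sees overlapped packets y = S*x, and the
 * spectrum is recovered via the closed-form inverse
 *     S^{-1} = (2/(n+1)) * (2*S^T - J),   J = all-ones matrix.
 * Real instruments use far longer sequences; all values are made up. */
#include <stdio.h>

#define N 7

int main(void)
{
    int seq[N] = {1, 1, 1, 0, 1, 0, 0};           /* m-sequence (x^3+x+1) */
    double x[N] = {0, 0, 5, 0, 0, 2, 0};          /* "true" spectrum      */
    double S[N][N], y[N] = {0}, xr[N] = {0};

    for (int i = 0; i < N; i++)                   /* circulant S-matrix   */
        for (int j = 0; j < N; j++)
            S[i][j] = seq[(i + j) % N];

    for (int i = 0; i < N; i++)                   /* encode: y = S*x      */
        for (int j = 0; j < N; j++)
            y[i] += S[i][j] * x[j];

    for (int i = 0; i < N; i++)                   /* decode with S^{-1}   */
        for (int j = 0; j < N; j++)
            xr[i] += (2.0 / (N + 1)) * (2.0 * S[j][i] - 1.0) * y[j];

    for (int i = 0; i < N; i++)
        printf("bin %d: true %.1f   recovered %.1f\n", i, x[i], xr[i]);
    return 0;
}
```

With noiseless input the recovery is exact – which is precisely the assumption that broke down in practice, as described below.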
The best way to learn a new technology, especially when that technology is a complex physical system, is to simulate it – which is what I did. After a lot of reading (I recommend the book “Mass Spectrometry: Principles and Applications” by de Hoffmann and Stroobant as an excellent first reference text) I wrote a detailed simulator in C that flies ions into and through the extraction region, pushes them into the flight tube, and flies them through the flight tube and reflector to the detector. The simulator inputs electric field information calculated by the SIMION field simulator (a plot of a slice through a SIMION 3D field calculation in the extraction region of the OTOF-MS is shown in the slide show below). This simulator was intended to model the Hadamard Transform OTOF, and was instrumental in understanding and analyzing the empirical data we collected, as well as in applying signal processing techniques to mitigate the effects of statistical variance we experienced.
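The original simulator is not reproduced here, but the core idea is simple to sketch: numerically integrate the ion’s equation of motion through the field regions. Below is a stripped-down, hypothetical version (uniform extraction field, no reflector, made-up geometry) of that kind of integration loop; the real simulator interpolated SIMION-calculated fields instead:

```c
/* Stripped-down sketch of the core of a TOF ion-flight simulation
 * (NOT the original Predicant simulator).  An ion is stepped through
 * a uniform extraction field, then drifts through a field-free tube.
 * Geometry and field values are made up for illustration. */
#include <stdio.h>

#define DALTON   1.66054e-27   /* kg */
#define E_CHARGE 1.60218e-19   /* C  */

int main(void)
{
    double m = 1000.0 * DALTON;       /* singly charged, 1000 Da ion   */
    double q = 1.0 * E_CHARGE;
    double E_push = 5.0e5;            /* extraction field, V/m         */
    double d_push = 0.01;             /* extraction gap, m             */
    double d_drift = 1.0;             /* field-free flight tube, m     */
    double x = 0.0, v = 0.0, t = 0.0, dt = 1e-10;

    while (x < d_push) {              /* accelerate through extraction */
        v += (q * E_push / m) * dt;   /* F = qE, explicit Euler step   */
        x += v * dt;
        t += dt;
    }
    t += d_drift / v;                 /* coast down the flight tube    */
    printf("arrival: t = %.2f us, v = %.0f m/s\n", t * 1e6, v);
    return 0;
}
```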
The team went to significant efforts to try and leverage the licensed Stanford IP, but ultimately we sidelined the Hadamard approach in the interests of timely delivery of instruments to the rest of the company – instruments that could be used to find biomarkers with the required specificity and sensitivity, and that could also be reliable enough to be used in a CLIA (Clinical Laboratory Improvement Amendments) approved lab to process revenue generating samples. In the conclusion section of a report I authored in Sept ’03, I said that in the low protein concentration regime we were operating in (the whole point being to find proteins at potentially threshold levels of detection), the beam ion statistics exhibited enough statistical fluctuation that the mathematically recovered spectrum had high levels of pseudo-noise, which severely limited the SNR gain we had expected to see from the Hadamard modulation. There were temporal beam intensity fluctuations caused by fluctuations (at relevant frequencies) in the electrospray and other factors, and at threshold concentration levels there would be less than one ion of interest per packet pushed! Another contributor to signal fluctuations was the pulser electronics that push packets out of the extraction region. In normal OTOF-MS operation this pusher runs at a constant rate, but with the Hadamard beam modulation the interval between pushes varies, which causes very slight differences in the dynamics of the 500V push signal envelope – which in turn results in variances in the received signal and in flight time variances of ~10ns (fairly significant from an accuracy standpoint – see the example peak width in the slideshow below). At resolutions above 5000, push-to-push flight time variances of no more than 1ns could be tolerated. As Chuck Fancher summarized it: “Multiplexing assumes accurate signal weighting. Harmonic or random fluctuation in the source degrades the final output. In short, the supposed gain from the improved duty cycle enabled by the Hadamard multiplexing approach might in fact be a loss – it all depends on the level of signal fluctuation”.
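The ion statistics problem is easy to demonstrate with a crude Monte Carlo (again my illustration, not the original analysis): when each push delivers a Poisson-distributed handful of ions rather than a stable beam, the equal-weighting assumption behind the multiplexing fails, and the inverse transform smears the fluctuations across all bins of the recovered spectrum as pseudo-noise:

```c
/* Crude Monte Carlo: Hadamard multiplexing with Poisson ion statistics.
 * Bins that should decode to exactly zero now fluctuate -- pseudo-noise.
 * All numbers are invented; illustrative only. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 7

static int poisson(double lambda)      /* Knuth's Poisson sampler */
{
    double L = exp(-lambda), p = 1.0;
    int k = 0;
    do { k++; p *= (double)rand() / RAND_MAX; } while (p > L);
    return k - 1;
}

int main(void)
{
    int seq[N] = {1, 1, 1, 0, 1, 0, 0};
    double x[N] = {0, 0, 0.5, 0, 0, 0, 0};  /* mean ions per push: < 1  */
    double y[N] = {0}, xr[N] = {0};
    int pushes = 1000;                      /* pushes per sequence slot */

    srand(42);
    for (int i = 0; i < N; i++) {           /* encode with shot noise   */
        double mean = 0;
        for (int j = 0; j < N; j++)
            mean += seq[(i + j) % N] * x[j];
        for (int p = 0; p < pushes; p++)
            y[i] += poisson(mean);
    }
    for (int i = 0; i < N; i++)             /* inverse S-matrix decode  */
        for (int j = 0; j < N; j++)
            xr[i] += (2.0 / (N + 1)) * (2.0 * seq[(j + i) % N] - 1.0) * y[j];

    for (int i = 0; i < N; i++)             /* zero bins now fluctuate  */
        printf("bin %d: expected %6.1f   recovered %8.1f\n",
               i, x[i] * pushes, xr[i]);
    return 0;
}
```

With these (invented) numbers, the bins that should be exactly zero come out at a few percent of the main peak – qualitatively the behavior shown in the plots further below.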
Another problem was that the electrostatic fields in the extraction region (the region from which the protein ions are accumulated and then launched or “pushed” into the flight tube) are not perfectly uniform. Upon entry into the extraction region the field lines are curved. These “fringing fields” can be seen in one of the extraction field simulation plots in the slider below, where the extraction region is highlighted in red. The fringing fields meant that packets accumulated for shorter periods of time, per the standard Hadamard modulation sequence, would be subject to more statistical variance, because a much larger percentage of the ions in those packets would be exposed to the non-uniform fringing fields at the entry to the extraction region. So I invented a solution, which was to create a longer-than-standard sequence formed by a linear combination of “identically sized” packets. When these “sub scans” are summed, they provide equivalent relative accumulated weighting to a standard Hadamard vector. We called this the linear combination mode and it is disclosed in the patent “Multiplexed Orthogonal Time of Flight Mass Spectrometer“. To facilitate increased uniformity of packets in the extraction region, and to allow more flexibility in modulating packets into the extraction region, a field termination grid was engineered and fabricated; a photo of the grid is shown in the slide show below, along with a simulation showing the much improved electrostatic field uniformity in the extraction region when the grid is used. Of course, using a grid does not come for free: roughly 10% of ions are lost simply by impacting the grid wires – so any grid employed in the ion beam flight path must provide a high value “payback”.
In practice, the Ion Funnel technology Mikhail brought with him from PNNL, which Predicant licensed, provided more gain than we would ever have seen with the Hadamard approach. The ion funnel is a large aperture electrostatic funneling device that captures most of the ions entering the OTOF-MS that would otherwise be lost using a conventional skimmer approach. The ion funnel can be seen on the left side of the instrument schematic above, and a photo is included in the slide show.
To get a sense of the pseudo-noise problem, below are a couple of plots from the signal processing software I developed as part of the investigation, used to process both the data taken in the lab and data from the simulations. On the left is the raw spectrum data. The heights of all the spikes should ideally be the same (except for two, which should be precisely half value). On the right is the reconstructed spectrum using the inverse Hadamard Transform. Notice the one large “valid” signal on the right, and the periodic spurious spikes everywhere else. These spurious spikes are mathematical “pseudo-noise” and subtract from the ideal height of the main peak.
Uninverted Spectrum Showing Statistical Variation
Main Peak with Periodic Pseudo Noise
SOURCES OF ERROR: So why did Predicant fail? Opinions will vary of course – but I believe the fundamental issue was too many sources of variability (error) and not enough data to isolate, tease out, and control for those sources of variance. The sources of variability simply overwhelmed the number of samples we had: with the time and money available, we could not generate the amount of data needed to find the necessary biomarkers and then develop a scalable system that would provide the necessary sensitivity and specificity day in and day out in a high volume CLIA approved lab setting.
In any new complex measurement system – especially when you don’t know exactly what you are looking for – it is critical that you thoroughly understand, control, and attempt to minimize all sources of error (a small numerical illustration of how these sources compound follows the list below). In Predicant’s case, the main error sources (and there were a lot of them) can be broken down into:
- Sample variance – variability in the biological samples themselves
- Variance due to small sample load into the CE chip
- Sample capture and storage variance
- Sample preparation variance – preparation of the sample for introduction into the measurement system
- Manufacturing variance in the miniaturized capillary electrophoresis chip. These chips were USED ONCE, so every sample used a different chip. This includes a critical component of the overall system: the tiny electrospray tip that was integral to the chip
- Run time behavior variance of each chip – which includes external environmental factors (humidity, ambient temperature, dust, power quality, etc)
- Run time behavior variance of the detection apparatus – the mass spectrometer – which includes external environmental factors (humidity, ambient temperature, dust, power quality, etc)
- Variance/inaccuracy in the post data acquisition signal processing
- Variance/inaccuracy in the clustering and categorization (Biomarker detection) software
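For independent error sources, variances add, so the combined coefficient of variation is the root-sum-square of the individual CVs and is dominated by the largest contributors – which is why isolating and attacking the big ones first matters so much. A tiny sketch with invented numbers:

```c
/* Error budget sketch: independent per-stage CVs combine as the
 * root-sum-square.  The stage names and CV values are invented,
 * purely to illustrate the arithmetic. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const char *stage[] = {"sample", "prep", "chip", "spray", "MS", "software"};
    double cv[] = {0.10, 0.08, 0.12, 0.15, 0.03, 0.01}; /* fractional CVs */
    double ss = 0;

    for (int i = 0; i < 6; i++) {
        printf("%-9s CV = %4.1f%%\n", stage[i], cv[i] * 100);
        ss += cv[i] * cv[i];              /* variances add */
    }
    printf("combined CV = %.1f%%  (root-sum-square)\n", sqrt(ss) * 100);
    return 0;                              /* ~23% with these numbers */
}
```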
I will step through each of these – but before I do, one should not underestimate the overall error “magnifying effect” of dealing with small quantities of the substance being measured – in this case blood. Larger quantities inherently exhibit averaging/normalizing effects, which are beneficial for anomaly detection. And in a startup, at least in 2004, many steps were MANUALLY performed that would be automated in a more mature, higher volume system – introducing more variance.
- Sample variance. Sample variance is significant and largely uncontrollable. This is simply human biology with all its dizzying diversity. So can we find biomarkers given this?
- Variance due to small sample load into the CE chip. The Capillary Electrophoresis (CE) chip only accepted a sample load of a few nL, which was on the order of 1000x less than an alternate separation technology such as Liquid Chromatography could support. This undoubtedly introduced variance that we never accurately measured as we never had a control to reference against.
- Sample capture and storage variance. This turned out to be huge, and something I did not fully appreciate, having had no previous experience with a diagnostics startup. Having lots of high quality samples is CRUCIAL. Ideally what one would do is build a prototype system and climb that learning curve, then go out and PROSPECTIVELY acquire a large number of blood samples using a precise, well controlled protocol that is targeted for the result you are trying to achieve on the system you have built and stabilized. The issue of course is that for a startup it is very difficult to carry the company from a cash burn perspective for 18 months while you are out collecting samples. Hence you must make do with RETROSPECTIVELY collected blood samples from a variety of collaborators (primarily academic) that can be widely geographically scattered. Key factors in sample collection and storage, such as those below, might then all differ:
- sample age (this varied widely amongst our collaborators from whom we obtained samples)
- storage environment (temperature stability, at what temperature, etc)
- number of freeze thaw cycles (was there ever a power failure or failure in your freezer?)
- blood treatment/prep prior to freezing
- type of tube used (Tiger-top or other, etc)
- protocol followed by patient cohort prior to collection. Did the guy have a cheeseburger just before the blood draw?
- demographics, health status – is this a random sampling? or some kind of specific sub-group?
- I remember one of the other VPs, I think it was Jonathan Heller, saying to me, “we may not be able to find biomarkers – but we can tell with near 100% certainty which sample cohort any particular sample came from”. I didn’t like the sound of that at the time.
- Sample prep variance. Preparing the sample for introduction into the capillary electrophoresis chip (to “pre-separate” prior to introduction into the mass spectrometer). A principal goal here is elimination of high abundance proteins and other substances that muck things up downstream. This was a manual process that, by consensus of the team at the time, introduced a lot of variance. It was a high priority to address (a detailed list of variance mitigation priorities during 2004 is shown in two slides in the slideshow below).
- Chip Manufacturing variance. This was significant and unfortunately persistent – it kept coming back to bite us in unexpected ways. Part of the issue was that manufacturing of the capillary electrophoresis chip was somewhat of a makeshift process. Coming from the semiconductor world I was familiar with class 10 clean rooms, people in bunny suits, and robots running around carrying wafer boats. This was nothing like that. The process used to coat the micro-capillary channels in the chip is shown in the slider below. Take a look and consider whether such a process, performed manually, using plastic chips that themselves exhibited manufacturing variance, might not introduce some variance.
A key component of the chip was the very fine electrospray tip at one end of the chip. Since the chip was plastic, shaping the tip geometry to very fine tolerance was difficult. Repeatability of spray tip geometry was crucial because in the electrospray process the electric field gets dramatically stronger at the tip and has a strong effect on the spray characteristics. The electrospray could assume a number of different stable spray “modes” depending on tip geometry and other factors (such as proximity to the counter electrode (the mass spec), voltage ramp up, presence and pH of fluids at the tip, etc.). All of this introduced variance.
- Run time behavior variance of each chip. I have already mentioned the differences in electrospray behavior, but as a significant component of the spray variance one should consider the “small quantity” challenge mentioned above. With the very small amount of fluid traveling through the chip channels, very slight changes in flow rates, quantities, or pH would change the current and/or voltage present at the spray tip – affecting the spray and introducing variance. Chip manufacturing consistency, along with coating consistency and repeatability, was a significant challenge to the team, and as I recall they made heroic efforts to tame that beast. Given that each chip was a “use once and throw away” item, the chip manufacturing team was always busy. At one point it was noticed that humidity made a difference (I’m not sure whether it was isolated to the electrospray, or to water getting into channels on the chip, or what) – but attempts were made to create better environmentally controlled settings by enclosing instruments in plastic tents and piping in filtered air, etc.
- Run time variance in the detection apparatus. Mass spectrometers are cantankerous beasts: running at high vacuum with noisy turbo-pumps, many high voltage elements inside the instrument requiring precise control, finicky detectors being sampled at multi-GHz rates using ultra-high performance analog-to-digital converters, quadrupoles and hexapoles requiring precise RF drivers, etc. Every time you “broke vacuum” to crack open the instrument to get at something, it could take days to bake out the moisture that would coat the inside of the instrument (aided by wrapping the instrument in heating blankets – the high humidity of a South San Francisco location did not help here) before the instrument could be pumped back down to operating vacuum. The biggest challenge facing the mass spec team was instrument drift: over time the sensitivity or accuracy would degrade. This was typically because some element somewhere in the instrument was not properly shielded or grounded, and so would charge up and generate enough of an electrostatic field to change the behavior of the ion beam. What was especially tough was that we could spray an infusion of a standard such as Neurotensin into the instrument using a standard metal electrospray tip and the instrument(s) would be rock solid for 6+ hours (I still have the plots). But in the lab setting, using the plastic chips, the behavior was not so good. We learned that the chips were emitting polymers and other things that were not making the mass spec happy. But as the company climbed the learning curve, as I recall, the instrument performance in the lab became pretty stable.
- Variance and/or quality of post data acquisition signal processing. This was a black box that I can only assume was well managed in terms of running known “controls” through the software for quality assurance. Jonathan Heller, who ran that team, was a sharp guy.
- Variance/inaccuracy in the clustering and categorization (Biomarker detection) software. Jonathan’s team used the best known method at that time for relatively small datasets that you wish to cluster and categorize. The primary classification method adopted was a supervised learning technique called the Support Vector Machine (SVM). There were experts in SVMs nearby at Stanford, which was handy. The SVM approach maps the samples into a multi-dimensional feature space, where the goal is to find boundaries that separate the classes of samples. With a lot more data, today one might try some Deep Learning techniques, but with the small sample population we had, non-probabilistic binary linear classifiers like SVMs were probably the best choice. To visualize the data, Jonathan’s team used Spotfire – which at the time was probably the best visualization tool available.
In the Deep Learning world, insufficient data leads to a problem called overfitting, where an overly complex model appears to fit the data but then fails to generalize. I’m not sure if there is a similar issue with the SVM approach used by Predicant – but there is no question that more data is better, and Predicant did not have enough quality samples.
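For readers unfamiliar with SVMs, here is a toy linear SVM trained by stochastic sub-gradient descent on the hinge loss (a Pegasos-style update), using made-up 2-D data. It is only a flavor of the idea – Predicant’s classifiers worked in far higher-dimensional feature spaces with proper SVM tooling:

```c
/* Toy linear SVM trained with stochastic sub-gradient descent on the
 * hinge loss (Pegasos-style).  Purely illustrative: the data are made
 * up and a real application would use a proper SVM library. */
#include <stdio.h>
#include <stdlib.h>

#define NPTS 8
#define DIM  2

int main(void)
{
    /* Two linearly separable invented clusters, labels +1 / -1 */
    double X[NPTS][DIM] = {{2,3},{3,3},{2,4},{3,5},
                           {7,1},{8,2},{7,0},{9,1}};
    int    y[NPTS]      = {+1,+1,+1,+1,-1,-1,-1,-1};
    double w[DIM] = {0,0}, b = 0, lambda = 0.01;

    srand(1);
    for (int t = 1; t <= 20000; t++) {
        int i = rand() % NPTS;
        double eta = 1.0 / (lambda * t);          /* decaying step size */
        double margin = y[i] * (w[0]*X[i][0] + w[1]*X[i][1] + b);
        for (int d = 0; d < DIM; d++)
            w[d] -= eta * lambda * w[d];          /* regularization shrink */
        if (margin < 1) {                         /* hinge-loss violation  */
            for (int d = 0; d < DIM; d++)
                w[d] += eta * y[i] * X[i][d];
            b += eta * y[i];
        }
    }
    printf("decision boundary: %.3f*x1 + %.3f*x2 + %.3f = 0\n",
           w[0], w[1], b);
    return 0;
}
```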
A Fork in the Road not Taken: As early as 2004, Predicant was staying abreast of potential alternatives to the CE chip for initial separation. The primary alternative was Liquid Chromatography (LC). In May of 2004 the VP of “everything wet” (John Stults) reported the availability of High Pressure LC (HPLC) columns with elution times as fast as 5 min (comparable to the CE chip).
One of the biggest issues with the CE chip was the small sample load (a few nL – which no doubt contributed to variability) versus the much larger sample load supported by an LC column (on the order of ~1000x more). LC also provided focusing (signal enhancement), which CE did not, significantly increasing sensitivity and the chance to discover biomarkers.
A downside to LC columns is that they are expensive, and although they could be re-used perhaps 200x, re-using them in a clinical lab setting did not make much sense.
In mid 2005 the team reported that using a 1D LC column a 100x improvement in sensitivity was observed, and later a ~10x increase in component counts was demonstrated. CVs of the LC system (elution time, component count, and intensity) were also considerably lower than those of the CE based system. At the time we were simply not seeing enough component counts with the existing system, and so in July Mikhail Belov proposed an alternate biomarker discovery approach that would employ a 2D LC system to significantly improve sensitivity and component counts and so facilitate biomarker discovery. 2D LC was proposed instead of HPLC to save cost.
The thinking was to use LC to discover better biomarkers and then seek more runway to mature the chip based CE approach and tune that platform for the discovered biomarkers in order to transfer to a CLIA lab setting. To facilitate transfer to the CE chip we would only load those fractions out of the LC system that were known to contain biomarkers.
However by this time the company was too invested in the chip based CE approach, the cash runway was short, and the proposal was not pursued.
After leaving Predicant, I did some consulting due-diligence in the medical device area (a non-invasive blood glucose monitoring system) for Skymoon Ventures (see: SKYMOON-VENTURES), before landing back in the semiconductor space reporting to Dado Banatao at Tallwood Venture Capital as an Executive-in-Residence, see: TALLWOOD-VC
TAKEAWAYS:
- If you are building any kind of measurement system – know (and bound) your sources of variance! You should have a good analytical sense of the impact these sources will have on your ability to measure/discriminate/discover whatever you are looking for – and of how many sample points might be required to measure or detect your target with an acceptable confidence level, given the best analysis approach then available.
- Working with blood is hard – especially with small quantities. I think the folks at Theranos would concur.
- If you need biological samples – it is definitely preferable if you can collect them prospectively using a protocol you have designed.
- Predicant had a bold vision for a noble cause – but it was a leap too far given the time and money available, the technology risks, and the many sources of variance.
RESOURCES:
Predicant Slide Show:
- Three Mass Spectrometers in the Lab
- First Predicant Org Chart
- Patent: Instrument
- Patent: Biomarker Detection
- Mass Spectrometer Specifications
- Instrument Performance Example: Neurotensin Spectrum
- Instrument Simulation Code Front Page
- Matlab Example
- Extraction Area Field Simulation Showing Fringing Fields
- Field Termination Grid to Minimize Fringing Fields
- Extraction Area Field Simulation after employing Field Termination Grid
- Instrument Source Area Schematic
- Instrument 3D Rendering
- Ion Funnel
- Capillary Electrophoresis Chip Coating Process
- Sources of Variance (Pg 1)
- Sources of Variance (Pg 2)