fMRI: software bug could invalidate 15 yrs of brain research 

Joined: Fri May 06, 2011 5:11 pm
Posts: 1400
Post fMRI: software bug could invalidate 15 yrs of brain research
http://www.sciencealert.com/a-bug-in-fm ... s-discover

beginExcerpt
There could be a very serious problem with the past 15 years of research into human brain activity, with a new study suggesting that a bug in fMRI software could invalidate the results of some 40,000 papers.

That's massive, because functional magnetic resonance imaging (fMRI) is one of the best tools we have to measure brain activity, and if it’s flawed, it means all those conclusions about what our brains look like during things like exercise, gaming, love, and drug addiction are wrong.

"Despite the popularity of fMRI as a tool for studying brain function, the statistical methods used have rarely been validated using real data," researchers led by Anders Eklund from Linköping University in Sweden assert.

The main problem here is in how scientists use fMRI scans to find sparks of activity in certain regions of the brain. During an experiment, a participant will be asked to perform a certain task, while a massive magnetic field pulsates through their body, picking up tiny changes in the blood flow of the brain.

These tiny changes can signal to scientists that certain regions of the brain have suddenly kicked into gear, such as the insular cortex region during gaming, which has been linked to 'higher' cognitive functions such as language processing, empathy, and compassion.

Getting high on mushrooms while connected to an fMRI machine has shown evidence of cross-brain activity - new and heightened connections across sections that wouldn’t normally communicate with each other.

It’s fascinating stuff, but the fact is that when scientists are interpreting data from an fMRI machine, they’re not looking at the actual brain. As Richard Chirgwin reports for The Register, what they're looking at is an image of the brain divided into tiny 'voxels', then interpreted by a computer program.

"Software, rather than humans ... scans the voxels looking for clusters," says Chirgwin. "When you see a claim that ‘Scientists know when you're about to move an arm: these images prove it,' they're interpreting what they're told by the statistical software."

To test how good this software actually is, Eklund and his team gathered resting-state fMRI data from 499 healthy people sourced from databases around the world, split them up into groups of 20, and measured them against each other to get 3 million random comparisons.

They tested the three most popular fMRI software packages for fMRI analysis - SPM, FSL, and AFNI - and while they shouldn't have found much difference across the groups, the software resulted in false-positive rates of up to 70 percent.

And that’s a problem, because as Kate Lunau at Motherboard points out, not only did the team expect to see an average false positive rate of just 5 percent, it also suggests that some results were so inaccurate, they could be indicating brain activity where there was none.

"These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results," the team writes in PNAS.

The bad news here is that one of the bugs the team identified has been in the system for the past 15 years, which explains why so many papers could now be affected.

The bug was corrected in May 2015, at the time the researchers started writing up their paper, but the fact that it remained undetected for over a decade shows just how easy it was for something like this to happen, because researchers just haven't had reliable methods for validating fMRI results.

Since fMRI machines became available in the early '90s, neuroscientists and psychologists have been faced with a whole lot of challenges when it comes to validating their results.

One of the biggest obstacles has been the astronomical cost of using these machines - around US$600 per hour - which means studies have been limited to very small sample sizes of up to 30 or so participants, and very few organisations have the funds to run repeat experiments to see if they can replicate the results.

The other issue is that because software is the thing that's actually interpreting the data from the fMRI scans, your results are only as good as your computer, and programs used to validate the results have been prohibitively slow.

But the good news is we've come a long way, and Eklund points to the fact that fMRI results are now being made freely available online for researchers to use, so they don't have to keep paying for fMRI time to record new results, and our validation technology is finally up to snuff.

"It could have taken a single computer maybe 10 or 15 years to run this analysis," Eklund told Motherboard. "But today, it’s possible to use a graphics card", to lower the processing time "from 10 years to 20 days".

So going forward, things are looking much more positive, but what of those 40,000 papers that could now be in question?

Just as we found out last year that when researchers tried to replicate the results of 100 psychology studies, more than half of them failed, we're seeing more and more evidence that science is going through a bit of a 'replication crisis' right now, and it's time we addressed it.

Unfortunately, running someone else's experiment for the second, third, or fourth time isn't nearly as exciting as running your own experiment for the first time, but studies like this are showing us why we can no longer avoid it.
end
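
For anyone who wants a feel for what the test described in the excerpt actually does, here is a rough back-of-the-envelope sketch - not the authors' code, and deliberately simplified: it compares one summary number per subject, whereas the real study runs cluster-extent inference on full 3-D images, which is exactly where the inflated rates came from.

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 499        # size of the healthy resting-state pool (as in the excerpt)
group_size = 20         # subjects per group
n_comparisons = 10_000  # toy stand-in for the ~3 million random comparisons
alpha = 0.05            # nominal false-positive rate

# One toy summary value per subject (e.g. an activation estimate).
pool = rng.normal(loc=0.0, scale=1.0, size=n_subjects)

false_positives = 0
for _ in range(n_comparisons):
    idx = rng.permutation(n_subjects)          # random split: two groups from one pool
    group_a = pool[idx[:group_size]]
    group_b = pool[idx[group_size:2 * group_size]]
    _, p = stats.ttest_ind(group_a, group_b)   # null hypothesis: no group difference
    if p < alpha:
        false_positives += 1

print(f"empirical false-positive rate: {false_positives / n_comparisons:.3f}")

For this stripped-down test the empirical rate comes out near the nominal 5 percent; Eklund's point is that the cluster-wise inference in SPM, FSL and AFNI fired up to around 70 percent of the time on the same kind of random splits of real resting-state data.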


Here's hoping Gemma will now give me credit for warning about the general nascence and primitive understandings of MRI imaging techniques (including fMRI). Gemma still has the chance to show us that she is not a gatekeeper herself (witting or otherwise) by properly acknowledging that there is indeed insufficient science to identify sociopaths.

Chico is a fifth column troll so his interpretation of the above software bug finding will be presented in the narrative of the gatekeeper, and should be taken with a grain of salt ... and for those of us who are seasoned recipients of his mendacity, a dump truck filled with road salt.


Pax

ps: I thank the person who sent me the email pointing to the software bug in fMRI imaging. Very much appreciated.

_________________
Flight that sends into the clouds brings wings to rest upon the boughs. Then further down to the liquid lawn, to serve as sentries for the gliding swan. Curve, a perfect turning of the line between here and Heaven, with extensions into infinitum.


Fri Jul 08, 2016 6:33 am

Joined: Sun May 22, 2016 10:26 am
Posts: 57
Post Re: fMRI: software bug could invalidate 15 yrs of brain research
Zook this is an awesome, awesome find - thank you very much, and thanks extended to your source also.

Robert Cox, Stephen Smith, Mark Woolrich, Karl Friston, and Guillaume Flandin provided valuable feedback for this study, with Flandin being part of the group that produced the following paper:
NIDM - Results: a Neuroimaging Data Model to share brain mapping statistical results
http://biorxiv.org/content/biorxiv/earl ... 8.full.pdf

What is really exciting about this is the reason there is serious investment in getting neuroimaging studies right: to provide unambiguous neuroimaging data that can be shared between research and scientific facilities, and subsequently used for research integrity and informed decision-making.
(Oh and, prior primitive data analysis could be invalidated, or, once re-testing occurs, could be solidly supported by the upgrade. COULD does not mean IS! Gosh, some people are so quick to dismiss and promote a negative - which in fact is not what this revealing research is, from many angles.)

Extract:
Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses.
[...]
The goal of NIDM is to provide a complete description of provenance for neuroimaging studies, from raw data to the final results including all the steps in-between. The core motivation of NIDM is to support data sharing and data reuse in neuroimaging by providing rich machine-readable metadata. Since its first developments in 2011, NIDM has been an ongoing effort and is currently comprised of three complementary projects: NIDM-Experiment, NIDM-Workflows and NIDM-Results. NIDM-Experiment targets the representation of raw data generated by the scanner and information on the participants. NIDM-Workflows focuses on the description of data analysis parameterization, including detailed software-specific variations. NIDM-Results, presented here, deals with the representation of mass-univariate neuroimaging results using a common descriptive standard across neuroimaging software packages.

A motivating use case for NIDM-Results was neuroimaging meta-analysis, but the format also produces a detailed machine-readable report of many facets of an analysis. The implementation of NIDM-Results within SPM and FSL, two of the main neuroimaging software packages, provides an automated solution to share maps generated by neuroimaging studies along with their metadata. While NIDM-Results focuses on mass-univariate studies and is mostly targeted at fMRI, the standard is also suitable for anatomical MRI (with Voxel-Based Morphometry), and Positron Emission Tomography (PET). It was developed under the auspices of the International Neuroinformatics Coordinating Facility (INCF) Neuroimaging data sharing Task Force (NIDASH) which comprises a core group of experts representing more than ten labs involved in various facets of neuroimaging (including statistical analysis, informatics, software development, ontologies). It also involved close collaboration with the main neuroimaging software developers.
The format is natively implemented in SPM and a NIDM-Results exporter is available for FSL and will be integrated in a future version of FSL. Both NeuroVault and CBRAIN support export to NIDM-Results and NeuroVault additionally can import NIDM-Results archives.
[…]
Data sharing in the neuroimaging community is still restrained by a number of psychological and ethical factors that are beyond the scope of the current paper (see (R. A. Poldrack and Gorgolewski 2014; Poline et al. 2012) for a review). Those will have to be addressed in order for data sharing to become common practice in the neuroimaging community. In an effort to address the technological barriers that make data sharing challenging, here we have proposed a solution to share neuroimaging results of mass univariate analyses.

As a first step to provide machine-readable metadata, we restricted our scope to information that was automatically extractable and attributes that were crucial for meta-analysis (e.g. number of subjects). This limited the amount of information that could be represented. For instance, the description of the paradigm was limited to the design matrix and a list of regressor names. Ideally, to be able to automatically query for studies of interest, one would need a more thorough description of the paradigm and of the cognitive constructs involved. While vocabularies are becoming available (e.g. Cognitive Atlas (R. A. Poldrack et al. 2011) and CogPO (Turner and Laird 2012)), description of fMRI paradigms is still a topic of active research. Some level of manual interaction to select contrasts of interest is therefore needed to compute a meta-analysis based on NIDM-Results packs. Nevertheless, NIDM-Results allows for the automation of part of the meta-analysis as described in our results. In the future, as a consensus develops on the description of paradigms, NIDM-Results could easily be extended to include this information.
Similarly, NIDM-Results could be extended to match emerging best practices (such as (Nichols et al. 2016)).
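
To make "rich machine-readable metadata" concrete, here is a rough illustration of the kind of record such a standard carries alongside a shared statistical map. The field names below are invented for the example; the real NIDM-Results vocabulary is an RDF/PROV-based standard, not this ad-hoc JSON.

Code:
import json

# Illustrative stand-in only - not the actual NIDM-Results schema.
result_record = {
    "software": {"name": "SPM", "version": "12"},           # analysis package used
    "number_of_subjects": 20,                                # crucial for meta-analysis
    "design": {"regressor_names": ["task", "motion_x"]},     # paradigm description (limited, per the paper)
    "contrast": {"name": "task > baseline", "statistic": "T"},
    "inference": {"threshold": 0.05, "correction": "FWE", "method": "cluster-extent"},
    "statistic_map": "spmT_0001.nii.gz",                     # the map being shared
}

print(json.dumps(result_record, indent=2))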


So the tech geeks are onto it and it's looking really good.
Now it's up to the community to catch up with the ethical discussions and to provide enough circumstantial evidence (of which there are enough fingerprints at the scene of the crime) to bring about more significant testing re psychopathy - if, of course, it isn't already under way and will be found in due course, as it has already been a year since the upgraded tech was available. Hmmm, wonder why there need to be ethical discussions if there is no evidence for psychological dispositions? Especially as a significant funding contributor for the upgrade is the Laura and John Arnold Foundation, and one of their initiatives is:

LJAF’s Criminal Justice initiative aims to reduce crime, increase public safety, and ensure the criminal justice system operates as fairly and cost-effectively as possible. In order to achieve these goals, we develop, incubate, and spread innovative solutions to criminal justice challenges. We assemble teams of experts from both inside and outside the criminal justice field to conduct research projects, create tools for practitioners, and partner with local jurisdictions to pilot and test new policies and practices. Our projects use data and technology to drive innovation and accelerate the adoption of proven reforms.

So once again, thanks Zook, this is a Fantastic Find and I am sincerely very, very grateful to you for sharing - (also as there are great off-shoot links for me coming from this paper).


Fri Jul 08, 2016 12:25 pm
Site Admin

Joined: Tue May 03, 2011 6:06 pm
Posts: 11843
Post Re: fMRI: software bug could invalidate 15 yrs of brain research
UncleZook wrote:
Chico is a fifth column troll so his interpretation of the above software bug finding will be presented in the narrative of the gatekeeper, and should be taken with a grain of salt ...

What are you so afraid of, Zook, that you have to try to convince others that they should not listen to what I have to say on the subject? Are you not interested in uncovering the truth? :lol:

UncleZook wrote:
Here's hoping Gemma will now give me credit for warning about the general nascence and primitive understandings of MRI imaging techniques (including fMRI). Gemma still has the chance to show us that she is not a gatekeeper herself (witting or otherwise) by properly acknowledging that there is indeed insufficient science to identify sociopaths.

You are one sick cookie, Zook, trying to manipulate Gemma like that. Sociopaths always expose themselves, if you know what to look for. And it's right there in print from the sociopath himself, dear UncleZook.

UncleZook wrote:
ps: I thank the person who sent me the email pointing to the software bug in fMRI imaging. Very much appreciated.

Your source feels like a Shadowself to me, in other words, another vengeful sociopath I have exposed in the past that has "hit pay dirt" information that she wants you to hammer me with. And of course, you are happy to oblige, due to a convergence of interests between you two, i.e. a common deviant psychology.

:face:

OK, Zook, once again, let's dance. Too bad you didn't take my advice.


First, did you even read the original report?!

Of course you didn't. You never do. And it always bites you in the ass. This will not be an exception.


Let's examine the premises this study rests on:

Quote:
Because two groups of subjects are randomly drawn from a large group of healthy controls, the null hypothesis of no group difference in brain activation should be true. Moreover, because the resting-state fMRI data should contain no consistent shifts in blood oxygen level-dependent (BOLD) activity, for a single group of subjects the null hypothesis of mean zero activation should also be true. -- page 1

That's a lot of should's! So we take a bunch of fMRI data, allegedly from "healthy controls" (how do we know this?), and we assume that they should pretty much all look the same, because allegedly they are all resting-state (how do we know this?). So the assumption is that the resting state brain scan data of an allegedly random group of humans won't show much variation. How do we know that assumption is true? Don't people all have different brains, just like they all have different fingerprints?

Quote:
Resting-state data should not contain systematic changes in brain activity, but our previous work (14) showed that the assumed activity paradigm can have a large impact on the degree of false positives. Several different activity paradigms were therefore used... -- page 2

So the prior assumption from page 1 is admittedly a problem, as the researchers have already experienced, because their "assumed activity paradigm" couldn't be relied upon!

That's just the beginning of the problems with this study, Zook, starting with its faulty premises, the same error you always make! Starting with faulty premises is more than enough to call the study into question.

But guess what, Zook? We don't even have to call it into question, because it doesn't really impact the prior work done on comparing brain activity in sociopaths versus non-sociopaths. Why?

Comparative brain scans between two psychologically distinct groups are likely to be completely unaffected by these alleged software "bugs", since the effects of the suspected errors are intrinsic to both. If we are comparing images and looking for gross differences and find them, the bug is not going to be the source of those differences. What this means is that studies comparing "normal" brain activity to "sociopathic" brain activity will not be invalidated due to the fMRI software processing if the same software processing is used for both!

fMRI doesn't have to be perfect or even locationally precise (within the brain) when scanning for sociopaths versus non-sociopaths. It just needs to consistently differentiate between the two. And it does, even with software bugs. That has been clearly proven and clearly confirmed many times by other more traditional methods of diagnosing sociopaths.

Zook, it is not an ad hominem attack in this case when I say you are an idiot.

_________________
It's not that we can't handle the truth. It's that they can't handle us if we know the truth.


Sat Jul 09, 2016 4:16 am

Joined: Fri May 06, 2011 5:11 pm
Posts: 1400
Post Re: fMRI: software bug could invalidate 15 yrs of brain research
Gemma wrote:
Zook this is an awesome, awesome find - thank you very much, and thanks extended to your source also.


You're welcome ... but only if you truly understand the import of what is being reported ... namely that up until last year, when the bug was purportedly fixed, all research prior to that point has been effectively invalidated.

http://www.sciencealert.com/a-bug-in-fm ... s-discover

beginExcerpt
To test how good this software actually is, Eklund and his team gathered resting-state fMRI data from 499 healthy people sourced from databases around the world, split them up into groups of 20, and measured them against each other to get 3 million random comparisons.

They tested the three most popular fMRI software packages for fMRI analysis - SPM, FSL, and AFNI - and while they shouldn't have found much difference across the groups, the software resulted in false-positive rates of up to 70 percent.

And that’s a problem, because as Kate Lunau at Motherboard points out, not only did the team expect to see an average false positive rate of just 5 percent, it also suggests that some results were so inaccurate, they could be indicating brain activity where there was none.

"These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results," the team writes in PNAS.

The bad news here is that one of the bugs the team identified has been in the system for the past 15 years, which explains why so many papers could now be affected.

The bug was corrected in May 2015, at the time the researchers started writing up their paper, but the fact that it remained undetected for over a decade shows just how easy it was for something like this to happen, because researchers just haven't had reliable methods for validating fMRI results.
end


In fact, Eklund and his team found false-positive rates of up to 70 percent where only 5 percent was expected. You and Chico were both promoting the high degree of accuracy in fMRI testing for sociopaths ... before I got involved with my warnings about the primitive state of science in identifying sociopaths.

The honest thing to do is to admit that you were wrong and update your understanding of the topic. You may even be accorded respect if you continue supporting fMRI, but with much less conviction than you have shown and more circumspection.

But instead, you're trying to move off one error without acknowledging it directly ... onto another error, namely, the error of assuming that the software bug was the only problem with the fMRI imaging techniques.

I already listed another problem in a previous post (or perhaps in one of my audio recordings) where I suggested that the testing of an individual is plagued with the immediacy of that individual's psychological state. The majority of the population (68%) is contained within the dynamic mass between +1 and -1 standard deviations. If this population is tested when their psychological nature is conditionally empathic, they will show one fMRI mapping ... if they are tested when their psychological nature is conditionally sociopathic, they will show a different fMRI mapping. IOW, the same individual can exhibit different mappings depending on the day of their testing and the state of their psychological nature at that time. This "mood difference" would be expected to be less in the predominantly genetic psychological natures, but as long as there is a strong environmental component involved, the mood at the time of the testing will reflect on the physiological blood flows in the brain (that are being monitored). Even at 3 std's in either direction, we still have a measurable if diminished environmental component.
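
(For anyone who wants to check the bell-curve figures above: assuming a normal distribution, the fraction of the population within plus-or-minus k standard deviations is erf(k / sqrt(2)).)

Code:
import math

# Fraction of a normal population within +/- k standard deviations.
for k in (1, 2, 3):
    print(f"within +/-{k} SD: {math.erf(k / math.sqrt(2)):.4f}")
# ~0.6827 at 1 SD (the 68% cited above), ~0.9545 at 2 SD, ~0.9973 at 3 SD.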

Then there's the problem of interpreting the blood flow results, and I'm not talking about software technician interpretations, rather, those interpretations made by the neurologists themselves. Indeed, until one gets to the level of genetic mapping, e.g. between the organic brain and the superorganic mind ...all we have is primitive science and those who are willing to peddle primitive science for their own purposes. In the end, sociopaths are still best identified by their own behavior ... and not by brave new world posturings about pre-identification.

But here's the greater problem: even if you identify and disqualify those individuals that have exhibited sociopathic behavior, you will still have not slowed the push for full spectrum dominance, the root cause of which is not sociopathy or sociopaths but extensive secretive sinister organization.

So looking for fMRI methods to slow the pace of the global evils will not have accomplished much in the end, merely removed old sociopaths from positions of power so that new sociopaths can occupy the chair, as it were ... and largely because only those bent on gaining power will aspire to the chair. Genuine empaths are not attracted by power and control over others; indeed, power and control are antithetical to empathic natures. So who is going to manage things when management primarily attracts sociopathic natures? Quite the conundrum, hey, Gemma?

Quote:
[...]
Similarly, NIDM-Results could be extended to match emerging best practices (such as (Nichols et al. 2016)).

So the tech geeks are onto it and it's looking really good.
Now it's up to the community to catch up with the ethical discussions and to provide enough circumstantial evidence (of which there are enough fingerprints at the scene of the crime) to bring about more significant testing re psychopathy - if, of course, it isn't already under way and will be found in due course, as it has already been a year since the upgraded tech was available. Hmmm, wonder why there need to be ethical discussions if there is no evidence for psychological dispositions? Especially as a significant funding contributor for the upgrade is the Laura and John Arnold Foundation, and one of their initiatives is:

LJAF’s Criminal Justice initiative aims to reduce crime, increase public safety, and ensure the criminal justice system operates as fairly and cost-effectively as possible. In order to achieve these goals, we develop, incubate, and spread innovative solutions to criminal justice challenges. We assemble teams of experts from both inside and outside the criminal justice field to conduct research projects, create tools for practitioners, and partner with local jurisdictions to pilot and test new policies and practices. Our projects use data and technology to drive innovation and accelerate the adoption of proven reforms.

So once again, thanks Zook, this is a Fantastic Find and I am sincerely very, very grateful to you for sharing - (also as there are great off-shoot links for me coming from this paper).


John Arnold was a hedge fund operator at Enron prior to amassing even greater wealth (also by illicit means, for there is no legitimate way to accumulate that kind of money outside speculation and derivative markets; Soros, Milliken, Arnold ... that's three-of-a-kind in a poker game, if you're into bluffing, Gemma). Arnold made 8 million dollars and disappeared when all the pensioners and people on fixed incomes lost their life savings, more or less. Hedge fund operators are invariably sociopaths. They feed off other people's wealth and have no compunction in calculating individuals such as senior citizens as soft target casualties, as they conduct their aggressions and attacks for victories and spoils. If you can find a hedge fund operator that you think is not a sociopath, please point them out to me and I'll expose them for you.

Also, Norman Dodd warned us about foundations and the conspiracy for one world government, in an Edward Griffin interview in 1982. If you think that Arnold is a philanthropist, then the following YouTube video is must-watch material:


If you still think Arnold is a genuine philanthropist, or that mega-foundations are funded by genuine caring souls, then you can probably get pieces of the Brooklyn Bridge on eBay.com. If they're all sold out ... I can probably get you a good deal on rusted metal off an old abandoned bridge spanning a fairly large-sized creek here in Nova Scotia. Once again, you're welcome.


Pax

_________________
Flight that sends into the clouds brings wings to rest upon the boughs. Then further down to the liquid lawn, to serve as sentries for the gliding swan. Curve, a perfect turning of the line between here and Heaven, with extensions into infinitum.


Sat Jul 09, 2016 7:00 am
Site Admin

Joined: Tue May 03, 2011 6:06 pm
Posts: 11843
Post Re: fMRI: software bug could invalidate 15 yrs of brain research
UncleZook wrote:
... but only if you truly understand the import of what is being reported ... namely that up until last year, when the bug was purportedly fixed, all research prior to that point has been effectively invalidated.

BS, Zook. All research has not been invalidated. This is just more of your "twist and shout".

UncleZook wrote:
You and Chico were both promoting the high degree of accuracy in fMRI testing for sociopaths ...

BS, Zook. We never emphasized accuracy. We only held fMRI up as an example of promising technology.

UncleZook wrote:
The honest thing to do is to admit that you were wrong and update your understanding of the topic. You may even be accorded respect if you continue supporting fMRI, but with much less conviction than you have shown and more circumspection.

Manipulation, Zook, a reflection of your own malfeasance. You are such a sociopath.

UncleZook wrote:
But instead, you're trying to move off one error without acknowledging it directly ... onto another error, namely, the error of assuming that the software bug was the only problem with the fMRI imaging techniques.

BS, Zook. The software bug has no significant impact on comparative studies. No one has claimed the sophisticated and complicated process of testing for sociopaths is problem-free. That's your straw man argument.

UncleZook wrote:
I already listed another problem in a previous post (or perhaps in one of my audio recordings) where I suggested that the testing of an individual is plagued with the immediacy of that individual's psychological state. The majority of the population (68%) is contained within the dynamic mass between +1 and -1 standard deviations. If this population is tested when their psychological nature is conditionally empathic, they will show one fMRI mapping ... if they are tested when their psychological nature is conditionally sociopathic, they will show a different fMRI mapping. IOW, the same individual can exhibit different mappings depending on the day of their testing and the state of their psychological nature at that time.

More BS, Zook. You clearly have no understanding of the environment in which testing occurs. Nor do you understand that the bulk of the population does not swing between "psychological natures" like leaves in the wind. You are being so deceptive in your posts that it only confirms you are a sociopath.

UncleZook wrote:
This "mood difference" would be expected to be less in the predominantly genetic psychologIcal natures, but as long as there is a strong environmental component involved, the mood at the time of the testing will reflect on the physiological blood flows in the brain (that are being monitored). Even at 3 std's in either direction, we still have a measurable if diminished environmental component.

Pure BS, Zook. Note that your claim that "the mood at the time of the testing will reflect on the physiological blood flows in the brain (that are being monitored)" could easily account for the variations attributed to the alleged software bug in the study you have cited, if your claim was true.

UncleZook wrote:
Indeed, until one gets to the level of genetic mapping, e.g. between the organic brain and the superorganic mind ...all we have is primitive science and those who are willing to peddle primitive science for their own purposes.

That is the crux of your argument, that science is primitive and highly flawed. Yet you use science to support your claim. You simply cherry pick in the pursuit of supporting your BS arguments, Zook.

UncleZook wrote:
But here's the greater problem: even if you identify and disqualify those individuals that have exhibited sociopathic behavior, you will still have not slowed the push for full spectrum dominance, the root cause of which is not sociopathy or sociopaths but extensive secretive sinister organization.

No, the greater problem is listening to sociopaths and their BS, as we are doing now with you.

UncleZook wrote:
So looking for fMRI methods to slow the pace of the global evils will not have accomplished much in the end, merely removed old sociopaths from positions of power so that new sociopaths can occupy the chair, as it were ...

Screening everyone for sociopathy on a regular basis can easily solve that problem.

UncleZook wrote:
Genuine empaths are not attracted by power and control over others; indeed, power and control are antithetical to empathic natures. So who is going to manage things when management primarily attracts sociopathic natures? Quite the conundrum, hey, Gemma?

Not a conundrum at all. People from the middle of the bell curve can replace sociopaths in positions of power and control. We are not suggesting replacing sociopaths with "genuine empaths". That's your straw man argument.

UncleZook wrote:
John Arnold was a hedge fund operator at Enron ... Also, Norman Dodd warned us ...

Why aren't you dismissing the scientists behind your cited study as gatekeepers and Rothschild stooges, Zook? Your deception knows no limits.

_________________
It's not that we can't handle the truth. It's that they can't handle us if we know the truth.


Sat Jul 09, 2016 5:14 pm
Site Admin

Joined: Tue May 03, 2011 6:06 pm
Posts: 11843
Post Re: fMRI: software bug could invalidate 15 yrs of brain research
Quote:
40,000 scientific papers invalidated. And from what I gather, not everyone is sure all the problems with MRI have been corrected. -- source

No, that's not UncleZook speaking, it's Jon Rappoport, a respected alternative-media rogue journalist who gets a lot of things right. But he also gets some things wrong.

Imagine every volt meter made had the same circuit design error that misreported the true voltage. Does that invalidate every measurement the volt meters made? No. The measurements would obviously be affected, but they would all be consistently and reliably affected. They could all be corrected by understanding how the design error changed the readings. The conclusions of comparative studies would be unchanged since voltage deltas would be unchanged whether you use the affected voltages or the corrected voltages.
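
To put numbers on the volt meter point - a toy sketch, assuming purely for illustration that the design error behaves as a consistent additive bias (the values are made up; real fMRI inference errors are statistical rather than a simple offset, but the cancellation logic is the same for any error shared by both groups):

Code:
# Hypothetical true values for two conditions, plus a shared instrument bias.
true_a = [5.00, 5.10, 4.95]
true_b = [5.50, 5.60, 5.45]
bias = 0.37

measured_a = [v + bias for v in true_a]   # every reading misreported the same way
measured_b = [v + bias for v in true_b]

true_delta = sum(true_b) / len(true_b) - sum(true_a) / len(true_a)
measured_delta = sum(measured_b) / len(measured_b) - sum(measured_a) / len(measured_a)

# The two deltas match (up to floating-point rounding): the shared bias cancels.
print(round(true_delta, 6), round(measured_delta, 6))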

And like Zook, Jon doesn't seem to differentiate between fMRI and MRI. MRI imaging can be quickly verified during surgery, and any discrepancies would be quickly noticed. fMRI is not easily verifiable, but since it is typically used comparatively, looking at signal deltas, the results could still be quite valid despite algorithm flaws.

It's disturbing to see the poor reasoning humans employ to jump to the conclusions they desire. It's as if no one can think independently in an unbiased way. 40,000 scientific papers invalidated? Are we going to rigorously analyze even one of those studies to determine the truth of that claim? Or are we going to dismiss them all with the wave of a hand based on the logic that "any error = invalid"? Every scientific study is supposed to calculate a margin of error, as all measurements contain error. This was the basic procedure being taught in my college physics lab in 1978. Perhaps Jon Rappoport and UncleZook didn't have the same educational opportunities that I had, or perhaps they interpreted things differently than I did. Or maybe they have already jumped to a conclusion that they feel they must justify by carefully selecting the most supportive evidence. But is that truth-seeking? I suggest it is not.
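
(That physics-lab habit, for the record, is one line of arithmetic: for a sample mean, an approximate 95% margin of error is 1.96 * s / sqrt(n). Toy numbers below, just to show the calculation.)

Code:
import math

measurements = [5.02, 4.97, 5.11, 4.93, 5.05]   # toy repeated measurements
n = len(measurements)
mean = sum(measurements) / n
s = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))   # sample standard deviation
margin = 1.96 * s / math.sqrt(n)                 # ~95% margin of error for the mean

print(f"{mean:.3f} +/- {margin:.3f}")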

_________________
It's not that we can't handle the truth. It's that they can't handle us if we know the truth.


Wed Mar 15, 2017 8:24 am
Site Admin

Joined: Tue May 03, 2011 6:06 pm
Posts: 11843
Post Re: fMRI: software bug could invalidate 15 yrs of brain research
fMRI brain scans are being pursued by intelligence agencies as an improved lie detector. Evidently 15 years of fMRI brain research is not being invalidated at all.

Quote:
The need for a better way to assess credibility was underscored by a 2002 report, The Polygraph and Lie Detection, by the National Research Council. After analyzing decades of polygraph use by the Pentagon and the FBI, the council concluded that the device was still too unreliable to be used for personnel screening at national labs. Stephen Fienberg, the scientist who led the evaluation committee, warned: "Either too many loyal employees may be falsely judged as deceptive, or too many major security threats could go undetected. National security is too important to be left to such a blunt instrument." The committee recommended the vigorous pursuit of other methods of lie detection, including fMRI.

"The whole area of research around deception and credibility assessment had been minimal, to say the least, over the last half-century," says Andrew Ryan, head of research at the Department of Defense Polygraph Institute. DoDPI put out a call for funding requests to scientists investigating lie detection, noting that "central nervous system activity related to deception may prove to be a viable area of research." Grants from DoDPI, the Department of Homeland Security, Darpa, and other agencies triggered a wave of research into new lie-detection technologies. "When I took this job in 1999, we could count the labs dedicated to the detection of deception on one hand," Ryan says. "Post-2001, there are 50 labs in the US alone doing this kind of work." -- source

Not only was fMRI already being vigorously explored for lie detection uses 15 years ago, but better brain scanning technologies have since been developed!

Quote:
For all the promise of fMRI lie detection, some practical obstacles stand in the way of its widespread use: The scanners are huge and therefore not portable, and a slight shake of the head – let alone outright refusal to be scanned – can disrupt the procedure. Britton Chance, a professor emeritus of biophysics at the University of Pennsylvania, has developed an instrument that records much of the same brain activity as fMRI lie detection – but fits in a briefcase and can be deployed on an unwilling subject. -- source

Too bad we don't have any technology to identify liars in the online forums! Heck, I would be satisfied if we just screened all forum posters for sociopathy using the bulky fMRI machines. That would cut down on the bulk of the mischief, I bet, as well as a good portion of the membership.

Hmm, I wonder if that's what happened in this forum, in a roundabout way... :lol:

_________________
It's not that we can't handle the truth. It's that they can't handle us if we know the truth.


Thu Sep 07, 2017 6:20 am