Obviously you do not understand the nuance required to make sense of statistics. Sorry.
If global volcanic eruptions were 100 out of two million volcanoes, then the eruption rate would be low. But, if the 100 eruptions all occurred on a single small island in the Pacific which had only one volcano, then their eruption rate would be extremely high.
There's nothing sticky about it: bombing this clinic is a terrorist act, and it's not justified by any amount of sane logic.
Since the vandal and trespasser aim to intimidate providers and patients, then yes: it is terrorism.
Why don't you just go through life telling anyone who asks that abortion providers face no risks and that the amount of violence against them is not that high? I'm sick of dealing with you loons.
So we are including vandalism and trespassing in the category of "terrorist attack"?
LOL
Why would that inform the argument when you cited the inevitable behavior of people who adopt an anti-abortion position? What that argument indicates is that a very large majority of the millions of people who adopt such a position should be out committing these acts of violence. But YOUR figures indicate the exact opposite.
I'm sorry, you're not going to convince me graffiti should be categorized as 'terrorism.'
Neither is killing the unborn.
Yes.
Both vandalism and trespassing disrupt the activities of the clinic, which is the goal of those terrorists: using violence to interfere with its normal operations.
Vandalism and trespassing often involve more than graffiti. Your argument is dishonest.
Not true.
What that argument indicates is that someone will commit an act of violence.
It shows here that 25% of all clinics have experienced severe violence. I think that's significantly higher than most other fields.
National Clinic Access Project - Clinic Surveys - Feminist Majority Foundation
Vandalism and trespassing are not classified as violent crimes.
While there certainly are other forms of vandalism, graffiti would also be classified as vandalism.
No, he wrote that the logical result of 'religious brainwashing' and listening to people describe the fetus as a baby is a person going out and doing things like bombing an abortion clinic. And it's the same position he has been defending since he made it.
What that argument indicates is that a very large majority of the millions of people who adopt such a position should be out committing these acts of violence.
I looked into the 2010 report your Feminist Majority Foundation did and found reason for concern.
First of all, the basic math does not add up. They list 357 respondents to their violence and harassment survey. They claim that 23.5% of the respondents reported "severe violence," which would be 84 out of 357 respondents. They defined severe violence as one of 11 possible acts, and they listed the percentage of respondents who claimed to be victims of each type of "severe violence."

The basic incongruity is this: there is no possible way the numbers they list can add up to the 23.5% of 2010 respondents they say claimed to be victims of a severe violent act. If you do the math and even generously assume that all categories are mutually exclusive (each responding facility was listed in only one category, though the report explicitly states this is not the case), the maximum number of facilities that could have reported an act of severe violence is only roughly 14.5% of respondents (or 72 clinics). It is worse, however, as the report mentions that the violence was concentrated in a small number of clinics (so one clinic could have acts listed in two or more categories). This means that several clinics are counted more than once in the aggregate sum of percentages (14.5%, or 72 clinics). So the actual number of clinics that have been the target of "severe violence" as defined in the study is likely a good deal less than even the 14.5% aggregate. There is literally no possible way, given the statistics they provide, that 84 clinics (or 23.5% of responding clinics) could have reported being the target of an act of "severe violence."
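The mutual-exclusivity bound described above can be sketched numerically. The 11 per-category rates below are hypothetical stand-ins (the report's actual figures are not reproduced here), chosen only so that their total matches the roughly 14.5% aggregate under discussion; the point is that even summing every category cannot reach the 23.5% headline figure.

```python
# Hypothetical per-category rates for the 11 "severe violence" categories
# (stand-ins, NOT the report's actual figures), as fractions of the
# 357 responding clinics.
category_rates = [0.031, 0.022, 0.017, 0.014, 0.012,
                  0.011, 0.009, 0.008, 0.008, 0.006, 0.007]

# Upper bound: even if no clinic appeared in more than one category,
# the share of distinct clinics reporting severe violence can be at
# most the sum of the category rates.
upper_bound = sum(category_rates)
print(f"aggregate upper bound: {upper_bound:.1%}")   # 14.5%
print(f"headline claim:        {0.235:.1%}")         # 23.5%

# Since clinics CAN appear in multiple categories (the report says the
# violence was concentrated), the true share of distinct clinics is
# lower still -- so the 23.5% headline cannot follow from these rates.
assert upper_bound < 0.235
```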
On an ethical level, the representation of the data on the initial page you present is trash. The graph's title says it represents the "Percentage of Clinics Experiencing Severe Violence." That is extremely misleading: it is actually the percentage of responding clinics that reported experiencing severe violence. Roughly 40% (238) of the clinics did not respond to the 2010 survey. While one might think it safe to generalize from the responders to the nonresponding clinics, it could easily be the case that many clinics did not respond precisely because they were not the target of any violent attacks (we call this a "selection effect"). This alternative explanation is supported by evidence in the most recent report that the number of clinics reporting no violence at all is increasing rapidly.

If you run the numbers including the non-respondents, only about 12% of all clinics could have reported being the target of an act of "severe violence." Taking into consideration the summation issue addressed in the previous paragraph, the actual percentage of clinics that reported being targeted is lower still. In other words, the title of the graph would lead people to believe that the percentage of clinics reporting at least one act of severe violence is over twice what it probably is.
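The nonresponse arithmetic above is easy to check. The figures come straight from the post (357 respondents, 238 non-respondents, and the 72-clinic aggregate bound); the selection-effect worry is that dividing by respondents alone inflates the apparent rate.

```python
# Figures from the post: 357 clinics responded, 238 did not.
respondents = 357
nonrespondents = 238
surveyed = respondents + nonrespondents   # 595 clinics surveyed in total

# Aggregate bound on clinics reporting severe violence (from the post).
reported = 72

# If the non-responding clinics experienced no severe violence (the
# selection-effect scenario), the rate across ALL surveyed clinics is:
rate_all = reported / surveyed
print(f"{rate_all:.1%}")   # 12.1% -- the "about 12%" figure above
```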
Combining the shady number crunching with the misrepresentation of those numbers leads me to believe that the people conducting and reporting this study are scientifically incompetent at best, or ethically bankrupt and agenda-driven at worst. Considering that these are probably professionals putting the study together, I think the more likely explanation is the latter.
tl;dr
I stopped reading when he claimed that 14.5% of 357 = 72
Thank you for this thoughtful analysis. I'm one of those who tends to gloss over numbers and accept them at face value, so I appreciate your effort. I shouldn't be so credulous.
Ha, that is hilarious! Must have been a math error. The statistics chart clearly shows a 20.1% aggregate rate of severe violent crime (which does roughly equal the 72-clinic figure he mentioned), and his argument, sans that 14.5% number, appears sound. Still, the difference between the 20.1% shown in the graph and the 23.5% claim is interesting. That would indicate that severe violent crime has held relatively steady among reporting clinics over the last 10 years, and it gets rid of the uptick at the end. It would also increase the difference highlighted in his second argument.
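For the record, the arithmetic being corrected here is quick to verify against the 357 respondents quoted in this thread:

```python
respondents = 357
print(round(0.201 * respondents))   # 72 -- 20.1% does match the 72-clinic figure
print(round(0.145 * respondents))   # 52 -- the earlier post's 14.5% does not
print(round(0.235 * respondents))   # 84 -- the headline 23.5% claim
```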
That's not the only mistake you made.
The 23.5% claim? Take a look at the graph and ask yourself why you used only #'s from 2010.
IOW, you screwed up all over the place.
23.5% is straight from the graph and from the report; it isn't my fault that the number is unsupported by the other data in the report. I only used the #'s from 2010 for two reasons:
1) It is the most recent published report and the most relevant to the situation we see today. The most recent report should also represent the most polished methodology of all the reports, since ideally they would learn from and adjust for any mistakes in previous reports. It would be silly for someone like me to criticize them for mistakes in older reports that they have already corrected in more recent ones.
2) Going through the reports takes time, and I only have so much. Reviewing several reports would take considerably more time, and unfortunately my real job has priority. I have made no remarks concerning the validity of previous reports, and the assumption of their validity is reflected in several of my recent comments regarding trend data. If I am wrong in that assumption, please let me know.
Regarding your last point, that I "screwed up all over the place:" please be specific. Other than a minor math error which had no significant effect on the conclusion of my analysis, what errors have I made? I believe I have provided enough information for an informed, critical dialogue beyond a simple "YOU ARE WRONG!" I took the time to critique the report. I was specific, detailed, and open concerning my analysis. If you are going to critique my analysis, I simply ask that you do the same. I would love to be 100% accurate in every little thing. Unfortunately, I only have so much time to double-check everything and I am only human. If there are as many mistakes in it as you indicate there are, I welcome the knowledge as I absolutely despise disseminating misinformation. Please use more than one sentence in your critique as I would like to understand how you believe each potential mistake specifically impacts the significance of my overall analysis. Basically, please give me more than: You made mistakes, the whole thing is crap!
The graph clearly refers to the 23.5% number being the result of attacks from 1993-2010. Therefore, if you're going to try to honestly debunk their claim, then you will have to use all of the reports going back to 1993.
If you don't have time to do that, then you shouldn't claim that they are wrong. It is dishonest.
And as far as detailing your mistakes, I have already done so. Do I really need to explain it as:
1) You left out the #'s for 2009
2) You left out the #'s for 2008
3) You left out the #'s for 2007
and so on and so forth back to 1993?
And as far as your "taking the time" goes, you just admitted that you did not take the time to do the job correctly.
I am afraid you are gravely mistaken. To help me explain, I shall provide the graph in question:
View attachment 67154197
The design of the study is a longitudinal repeated-measures survey. You see that last data point? 23.5%? That isn't an aggregate data point from the last 15 years; that point represents the data from the latest survey period (2010). If you read the report thoroughly, it clearly explains that graph.
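The distinction the two posters disagree about can be illustrated with made-up numbers (the yearly rates below are hypothetical, not from the report). In a repeated-measures series, each plotted point reflects only its own survey period; a cumulative reading would pool the periods, and would also have to deduplicate clinics hit in more than one year.

```python
# Hypothetical yearly rates (NOT the report's data): fraction of that
# year's respondents reporting severe violence.
yearly_rates = {2008: 0.18, 2009: 0.16, 2010: 0.235}

# Repeated-measures reading: the last plotted point is the 2010 survey
# period alone.
print(yearly_rates[2010])              # 0.235

# A cumulative reading would instead pool the periods; even a naive
# pooled average differs from any single year's point.
pooled = sum(yearly_rates.values()) / len(yearly_rates)
print(round(pooled, 3))                # 0.192
```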
Umm, no
"Longitudinal study" means it aggregates the data over the entire course of the time period.
PS - your link doesn't work