<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Does &#8220;Statistical Significance&#8221; Imply &#8220;Actually Significant&#8221;?</title>
	<atom:link href="http://popsych.org/does-statistical-significance-imply-actually-signifiant/feed/" rel="self" type="application/rss+xml" />
	<link>http://popsych.org/does-statistical-significance-imply-actually-signifiant/</link>
	<description>The Internet&#039;s Best Evolutionary Psycholo-guy</description>
	<lastBuildDate>Wed, 03 Jan 2018 01:05:13 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.2</generator>
	<item>
		<title>By: Statisticial Issues In Psychology And What Not To Do About Them &#124; Pop Psychology</title>
		<link>http://popsych.org/does-statistical-significance-imply-actually-signifiant/#comment-513</link>
		<dc:creator>Statisticial Issues In Psychology And What Not To Do About Them &#124; Pop Psychology</dc:creator>
		<pubDate>Sat, 23 Feb 2013 22:16:54 +0000</pubDate>
		<guid isPermaLink="false">http://popsych.org/?p=940#comment-513</guid>
		<description>[...] I&#8217;ve discussed previously, there are a number of theoretical and practical issues that plague psychological research in terms [...]</description>
		<content:encoded><![CDATA[<p>[...] I&#8217;ve discussed previously, there are a number of theoretical and practical issues that plague psychological research in terms [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: The Sometimes Significant Effects Of Sexism &#124; Pop Psychology</title>
		<link>http://popsych.org/does-statistical-significance-imply-actually-signifiant/#comment-391</link>
		<dc:creator>The Sometimes Significant Effects Of Sexism &#124; Pop Psychology</dc:creator>
		<pubDate>Sun, 25 Nov 2012 01:48:57 +0000</pubDate>
		<guid isPermaLink="false">http://popsych.org/?p=940#comment-391</guid>
		<description>[...] Post navigation &#8592; Previous [...]</description>
		<content:encoded><![CDATA[<p>[...] Post navigation &larr; Previous [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Leigh Caldwell</title>
		<link>http://popsych.org/does-statistical-significance-imply-actually-signifiant/#comment-389</link>
		<dc:creator>Leigh Caldwell</dc:creator>
		<pubDate>Tue, 20 Nov 2012 14:51:34 +0000</pubDate>
		<guid isPermaLink="false">http://popsych.org/?p=940#comment-389</guid>
		<description>One of the authors, Simonsohn, also just presented at the SJDM conference a clever method for detecting what he calls &quot;p-hacking&quot;, the collective name for the p-value-reducing practices mentioned above. 

It doesn&#039;t work on single papers but can be used across all the papers in a group: say, all papers on the endowment effect or all by a particular author. The technique simply requires drawing a histogram of all the p-values in all the papers.

If the papers are describing real effects and conducted correctly, we would expect to see more p=0.01 than p=0.02 and more p=0.03 than p=0.05 - the histogram will be skewed towards zero. If there is no effect and the results are purely due to chance, we&#039;d see a flat graph. And if there is a lot of p-hacking, we&#039;d see it skewed the other way, towards the 0.05 end. 

He analysed a couple of groups of papers (based on specific keyword criteria) and found that p-hacking could indeed be detected in certain bodies of work. A clever technique, and very practical at the meta-level for assessing whether the overall research in a particular field can be relied on.</description>
		<content:encoded><![CDATA[<p>One of the authors, Simonsohn, also just presented at the SJDM conference a clever method for detecting what he calls &#8220;p-hacking&#8221;, the collective name for the p-value-reducing practices mentioned above. </p>
<p>It doesn&#8217;t work on single papers but can be used across all the papers in a group: say, all papers on the endowment effect or all by a particular author. The technique simply requires drawing a histogram of all the p-values in all the papers.</p>
<p>If the papers are describing real effects and conducted correctly, we would expect to see more p=0.01 than p=0.02 and more p=0.03 than p=0.05 &#8211; the histogram will be skewed towards zero. If there is no effect and the results are purely due to chance, we&#8217;d see a flat graph. And if there is a lot of p-hacking, we&#8217;d see it skewed the other way, towards the 0.05 end. </p>
<p>He analysed a couple of groups of papers (based on specific keyword criteria) and found that p-hacking could indeed be detected in certain bodies of work. A clever technique, and very practical at the meta-level for assessing whether the overall research in a particular field can be relied on.</p>
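<p>[Editor's note: as a rough illustration of the histogram shapes described above (not Simonsohn's actual method or code), here is a minimal simulation. It draws two-sided z-test p-values under a true effect and under the null, then bins the significant ones; the effect size, sample size, and bin edges are arbitrary choices for the sketch.]</p>

```python
import math
import random

def p_value(z):
    """Two-sided p-value for a z statistic, via the normal CDF (erf)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def simulate_p_values(effect, n, trials, rng):
    """P-values from z-tests of a sample mean against zero (known sigma = 1)."""
    out = []
    for _ in range(trials):
        xs = [rng.gauss(effect, 1.0) for _ in range(n)]
        z = (sum(xs) / n) * math.sqrt(n)  # xbar / (sigma / sqrt(n))
        out.append(p_value(z))
    return out

def histogram(ps, edges=(0.01, 0.02, 0.03, 0.04, 0.05)):
    """Counts of significant p-values per bin: (0,.01], (.01,.02], ..., (.04,.05]."""
    counts = [0] * len(edges)
    lo = 0.0
    for i, hi in enumerate(edges):
        counts[i] = sum(1 for p in ps if lo < p <= hi)
        lo = hi
    return counts

rng = random.Random(0)
real = histogram(simulate_p_values(0.5, 30, 4000, rng))  # true effect present
null = histogram(simulate_p_values(0.0, 30, 4000, rng))  # no effect at all
print("real effect:", real)  # skewed towards zero: most mass near p = 0.01
print("no effect:  ", null)  # roughly flat across the bins
```

<p>[Under a true effect the first bin dominates; under the null each bin holds about one percent of the trials. The left-skew signature of p-hacking would require simulating the questionable practices themselves (e.g. optional stopping), which this sketch does not attempt.]</p>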
]]></content:encoded>
	</item>
</channel>
</rss>
