<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: The Value Of Association Value</title>
	<atom:link href="http://popsych.org/the-value-of-association-value/feed/" rel="self" type="application/rss+xml" />
	<link>http://popsych.org/the-value-of-association-value/</link>
	<description>The Internet&#039;s Best Evolutionary Psycholo-guy</description>
	<lastBuildDate>Wed, 03 Jan 2018 01:05:13 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.2</generator>
	<item>
		<title>By: Peter Gerdes</title>
		<link>http://popsych.org/the-value-of-association-value/#comment-1049</link>
		<dc:creator>Peter Gerdes</dc:creator>
		<pubDate>Thu, 22 Sep 2016 18:10:10 +0000</pubDate>
		<guid isPermaLink="false">http://popsych.org/?p=5487#comment-1049</guid>
		<description>You are far too quick to throw away evolutionary explanations for why we might tend not to believe things which make us feel bad about ourselves.  True, if one designed humanoid robots for maximum reproduction, one probably wouldn&#039;t include such a function, but that isn&#039;t how evolution works.

I certainly agree that we should be suspicious of any theory that calls for a robust general process by which we maintain our self-image against conflicting data.  However, this in no way suggests that there aren&#039;t circumstances where we have evolved to `trick&#039; certain brain processes/methods.  In the same way, a messy programmer might decide that, instead of writing a fancy new method, he will tweak the system by deliberately feeding false/adjusted data to the existing method, as long as it fixes more problems than it causes.

Just-so story 1:

It is beneficial to present as if one believes positive things about oneself, as it will encourage others to view you as more valuable.  However, it is evolutionarily difficult and computationally expensive to develop a full second set of books for every kind of self-knowledge (so that you have the external story and the internal one).  The cheap version, merely saying you are awesome at everything or otherwise not behaving the way you would if accurately reporting the internal version, is detectable and would do no good.

The obvious evolutionary shortcut, in situations where one&#039;s own internal knowledge isn&#039;t that important, is simply to compromise and bump up the internal confidence to convincingly sell the story to others.  One way evolution can make this happen is to layer on some kind of filter or avoidance mechanism that partially misleads your estimate of how much someone else likes you.

Just-so story 2:

We are hardwired with a module that responds to fairly simple cues about friendships/relationships, and this module is strongly wired to reward/punishment circuitry (reward for forming alliances, etc.).  However, other aspects of our brain (more abstract, focused on problem solving) may be able to realize at some level that the warmth we feel toward person X doesn&#039;t really reflect the strength of that relationship, and, wanting to avoid the unpleasant feelings that would result if the reward module found out, we deliberately avoid situations which would give it the simple kind of cues it responds to.  Obviously I am anthropomorphizing the brain a bit here; we don&#039;t `realize&#039; anything but simply reinforce pathways that cause us to avoid such situations.</description>
		<content:encoded><![CDATA[<p>You are far too quick to throw away evolutionary explanations for why we might tend not to believe things which make us feel bad about ourselves.  True, if one designed humanoid robots for maximum reproduction, one probably wouldn&#8217;t include such a function, but that isn&#8217;t how evolution works.</p>
<p>I certainly agree that we should be suspicious of any theory that calls for a robust general process by which we maintain our self-image against conflicting data.  However, this in no way suggests that there aren&#8217;t circumstances where we have evolved to `trick&#8217; certain brain processes/methods.  In the same way, a messy programmer might decide that, instead of writing a fancy new method, he will tweak the system by deliberately feeding false/adjusted data to the existing method, as long as it fixes more problems than it causes.</p>
<p>Just-so story 1:</p>
<p>It is beneficial to present as if one believes positive things about oneself, as it will encourage others to view you as more valuable.  However, it is evolutionarily difficult and computationally expensive to develop a full second set of books for every kind of self-knowledge (so that you have the external story and the internal one).  The cheap version, merely saying you are awesome at everything or otherwise not behaving the way you would if accurately reporting the internal version, is detectable and would do no good.</p>
<p>The obvious evolutionary shortcut, in situations where one&#8217;s own internal knowledge isn&#8217;t that important, is simply to compromise and bump up the internal confidence to convincingly sell the story to others.  One way evolution can make this happen is to layer on some kind of filter or avoidance mechanism that partially misleads your estimate of how much someone else likes you.</p>
<p>Just-so story 2:</p>
<p>We are hardwired with a module that responds to fairly simple cues about friendships/relationships, and this module is strongly wired to reward/punishment circuitry (reward for forming alliances, etc.).  However, other aspects of our brain (more abstract, focused on problem solving) may be able to realize at some level that the warmth we feel toward person X doesn&#8217;t really reflect the strength of that relationship, and, wanting to avoid the unpleasant feelings that would result if the reward module found out, we deliberately avoid situations which would give it the simple kind of cues it responds to.  Obviously I am anthropomorphizing the brain a bit here; we don&#8217;t `realize&#8217; anything but simply reinforce pathways that cause us to avoid such situations.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
