The sad truth is that "non-fiction" has been unreliable from the beginning, no matter how finely grained a slice of human knowledge we wish to consider. In my own field, for instance, critics have tried to replicate the findings of academic journal articles by economists, using the original data sets. More often than not, the replication fails; even half the results of an article typically cannot be reproduced. Note that the journals publishing these articles often use two or three referees--experts in the area--and typically accept only about 10 percent of submitted papers. And economics, by the way, is often considered the most rigorous and demanding of the social sciences.
You can knock down the reliability of published research another notch by considering "publication bias," the tendency of the editorial process to favor novel and striking results. Sometimes a novel result will appear to be true through luck alone, just because the numbers happen to line up the right way, even though the observed relationship would not hold up more generally. Articles with striking findings are more likely to be published and later publicized, whereas it is very difficult to publish a piece that says: "I studied two variables and found they were not much correlated at all." If you adjust for this bias in the publication process, it turns out you should hardly believe any of what you read. Claims of significance are put forward at a disproportionately--and misleadingly--higher rate than claims of non-significance.
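The arithmetic behind this is easy to see in a toy simulation. Every number below is an illustrative assumption of mine, not a figure from the text: suppose 90 percent of the hypotheses researchers test are truly null, real effects are modest, and only statistically significant results (p < 0.05) get "published." Even when every individual study is done honestly, a large share of the published record turns out to be noise.

```python
# Toy sketch of publication bias. Assumed parameters (hypothetical):
# 90% of tested effects are truly zero, real effects have size 0.5,
# samples have 50 observations per group, and only p < 0.05 results
# count as "published."
import math
import random

random.seed(0)

def two_sample_p(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    n = len(a)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    se = math.sqrt(var(a) / n + var(b) / n)
    z = abs(sum(b) / n - sum(a) / n) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def published_false_share(n_studies=5000, n=50, effect=0.5,
                          share_null=0.9, alpha=0.05):
    false_pubs = real_pubs = 0
    for _ in range(n_studies):
        is_null = random.random() < share_null
        mu = 0.0 if is_null else effect
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(mu, 1.0) for _ in range(n)]
        if two_sample_p(control, treated) < alpha:  # striking -> published
            if is_null:
                false_pubs += 1
            else:
                real_pubs += 1
    # Share of the "published literature" that is a statistical fluke.
    return false_pubs / (false_pubs + real_pubs)

print(f"share of published findings that are flukes: "
      f"{published_false_share():.0%}")
```

Under these assumptions, something like a third or more of the "published" results are false positives, even though each study individually used the conventional 5 percent significance threshold. The point is not the exact number, which depends entirely on the assumed parameters, but the mechanism: filtering on significance turns a small per-study error rate into a large error rate in what readers actually see.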