Saturday, May 24, 2014

Pass / Fail Mentality



Recently, I was talking about calibration (here and here), and how it should be more than just identifying the most likely cause of the output of a measuring instrument. The calibration process should strive to characterize the range of true conditions that might produce such an output, along with any major asymmetries (bias) in the relationship between the truth and the instrument's reading. In short, we need to identify all the major characteristics of the probability distribution over states of the world, given the condition of our measuring device.

Failure to go beyond simply identifying each possible instrument reading with a single most probable cause is a special case of a very general problem that, in my opinion, plagues scientific practice. Such a major failure mode should have a special name, so let's call it 'pass / fail mentality.' It is the most extreme possible form of a fallacy known as spurious precision, and it involves needlessly throwing away information.
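The information loss is easy to see in a toy model. Below is a minimal sketch in Python (all numbers are invented for illustration: a hypothetical 10.0 mm tolerance limit and Gaussian measurement noise of 0.5 mm) showing how raw readings that carry very different amounts of evidence all collapse onto the same pass or fail label:

```python
import math

# Hypothetical example: a part's true length is measured with Gaussian
# noise (sigma = 0.5 mm), and a pass/fail gauge reports only whether the
# reading exceeded a 10.0 mm tolerance limit.
sigma = 0.5
limit = 10.0

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Under a flat prior, P(true length > limit | reading) is the graded
# quantity Phi((reading - limit) / sigma) -- a number the raw reading
# supports but the binary label destroys.
for reading in (9.0, 9.9, 10.1, 11.0):
    p = phi((reading - limit) / sigma)
    label = "FAIL" if reading > limit else "PASS"
    print(f"reading {reading:4.1f} mm -> {label}, P(out of spec) = {p:.3f}")

# 9.0 and 9.9 both report 'PASS', and 10.1 and 11.0 both report 'FAIL',
# yet the evidence each raw reading carries is very different.
```

Reporting only the label is the extreme case: a continuum of evidential states gets compressed to a single bit.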

Tuesday, May 20, 2014

Announcing: Moral Science Index




Continuing the paradigm established by my glossary and my mathematical index, I've put together an index to and summary of the material I've accumulated on the topic of moral science. The index can be reached here, or from the link, 'Moral Science', on the right-hand side, beneath my profile.

The idea is simply to provide a point of entry for people interested in knowing what I have to say on this topic. People can see everything I have presented on this theme, the order in which the different pieces were published (and hence, approximately, their dependencies), a short description of each piece's function, together with some global motivating and qualifying remarks.

The relationship between science and morality represents a significant percentage of the material on my blog. It's an important (by definition) and highly overlooked topic, so I think it is important for people to have a single point of access to this material, the same way that the mathematical index provides a consolidated resource for learning about statistics, and the same way that the glossary represents the most definitive statement of my philosophy available, anywhere. (In some respects, I now view the blog as secondary to the glossary.)

I will try to keep the moral science index current - as I release more material, I'll update the index accordingly.

As always, I welcome your comments, questions, criticisms, outraged indignation, etc. If anything needs clarification, the fault is mine. If you're curious about some detail I can help with, then I'm delighted to do so (that's the whole point of the website, actually). Comments are open here and on the index itself, and alternative contact details can be found on the right-hand side of this page.


Some Highlights:

For your convenience, I'll reproduce here some of the major points from the moral science page. 

(1) As of the publication date of this blog post, the index stands at:
Blog entries on this topic (in order of publication):

Glossary entries on relevant concepts:

(2) To disclaim any extraordinary expertise in any specific realm of moral decision:
My writing on ethics is not to prescribe how to behave, but to inform on how to know how to behave.

(3) Quoting from the overview:
The founding principle behind my writing on this blog is that there is no better method to learn about anything than science. If a thing is meaningful - has consequences - science can measure it, by virtue of those consequences.... 

It is often said that science has nothing to say on the matter of what constitutes moral behaviour. If correct, this leaves us with only one option: morality has no meaning, it is a non-concept. It seems to me absurdly trivial that this is not so. Anyway, only a moderate amount of reflection is required to prove it. Thus, it is equally trivial to prove that science can guide us - in fact, is the optimal guide - concerning moral prescription.

(4) Another feature on the moral science page is a short list of blog articles I expect to write on the topic in the near future, covering (in no particular order):
  • the correspondence, if any, between correct consequentialism and classic utilitarianism
  • the correspondence, if any, between correct consequentialism and political libertarianism
(Spoiler alert: the answer in both cases is, not so much.)
  • some necessary aspects of the nature of human decision criteria
  • the limited insight offered by the classic thought experiments in the philosophy of ethics
  • the potential for correct moral realism to significantly reduce reliance on superstition, leading to a better informed and more rationally directed society 



Saturday, May 17, 2014

The Calibration Problem: Why Science Is Not Deductive



Here is perhaps the most important fact about scientific method that anybody can ever learn: the optimal course of a scientific investigation is to provide probability assignments for propositions about the universe, and when scientific method deviates from this optimum path, it is valid only to the extent that it successfully approximates this ideal. There is a simple reason for this:

We would love to be able to say that we are 100% certain about X, that Y is guaranteed to be true, or that fact Z about the universe has somehow entered our heads and impressed infallible knowledge of its necessary truth upon our minds, but of course, except for the most trivial propositions, none of these is possible.

Firstly, every measurement is subject to noise, so there will always be a degree of uncertainty about what caused a particular experience.

Secondly, and far more fundamentally, calibration of any instrument requires certain symmetries of physical law to be hypothesized. Here's what I mean:

Tuesday, May 6, 2014

Calibrating an X-ray Spectrometer - Spectral Distortion




Calibration is a process whereby a relationship is inferred between the output of some measuring instrument and the physical process responsible for that output. An instrument may be something as simple as a ruler, or something as complicated as the Human Genome Project or the Planck cosmic background survey. Calibration is fundamental to science. We might even say that it is science.

When we think about calibration, we often think simply about finding the most probable value for some physical parameter, given some reading from an instrument. In the previous part, I described this simple process for a device used to characterize the distribution of photon energies in a stream of x-rays.

But we really ought to think of calibration as more than this. To make the best inferences possible from a reading, we should formulate the entire probability distribution, not just the location of its maximum, for the state of the world when the machine goes "bing," or when the display reads "42." When the readout says 7, it's good to know that I've most probably just found a black hole (perhaps), but it's also good to know what alternative explanations there are, and what amounts of probability mass they command.
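As a toy illustration, here is a sketch in Python (every number, source name, and response curve is invented for the example) that applies Bayes' theorem to a single reading and reports the full posterior over candidate causes, rather than just the winner:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian density: an assumed model of the instrument's response."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical instrument response: mean reading and noise width for each
# candidate cause, plus an assumed prior over the causes.
sources = {
    "background": (3.0, 1.0),
    "calibration line": (6.5, 0.8),
    "black hole (!)": (7.2, 1.5),
}
prior = {"background": 0.90, "calibration line": 0.09, "black hole (!)": 0.01}

reading = 7.0  # the display reads 7

# Bayes' theorem: posterior ∝ prior × likelihood, then normalize.
unnorm = {s: prior[s] * gauss(reading, mu, sig) for s, (mu, sig) in sources.items()}
z = sum(unnorm.values())
posterior = {s: p / z for s, p in unnorm.items()}

for s, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{s:18s} P = {p:.3f}")
```

The point is that the second- and third-place causes retain quantified probability mass; reporting only the maximum would discard exactly the information an inference downstream might need.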

Saturday, May 3, 2014

Calibrating an X-ray Spectrometer - First Steps




Recently, I've been working with a borrowed piece of equipment - an x-ray spectrometer - whose response I need to understand, so I can take measurements with it. This is a special case of the general problem of calibration, which is a crucial topic in science, so I'd like to take some time to describe the procedure I went through. As you'll see later, the problem is not fully solved yet, which I suppose illustrates the trial-and-error nature of scientific work. Regardless of the degree of ultimate success, though, the process I'll describe strikes me as a fine illustration of the basic logic of experimental science.

Saturday, April 26, 2014

The Exponential Distribution




The exponential distribution holds a special significance for me. My PhD thesis was all about optical transients, the simplest mathematical models of which are exponential distributions. Currently, I work in x-ray science, which is heavily concerned with the depletion of an (x-ray) optical field as it traverses some distribution of matter (both in an object being imaged, and in the detector) - this time the exponential distribution is over space, rather than time, but the mathematics is the same.

Any kind of involvement with mathematical science quickly brings us into intimate contact with exponential functions, as these arise left, right, and centre in the solutions of differential equations. The reason for this is related to the fact that the exponential is the only mathematical function (up to a constant factor) that is its own derivative. This is closely related to a special property of the exponential distribution, known as memorylessness: what will happen next - its rate of change - is entirely governed by the current state. So let's take a quick look into how the exponential distribution comes about, and what its major characteristics are.
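Memorylessness can be stated and checked directly: P(X > s + t | X > s) = P(X > t). The sketch below (plain Python, with an arbitrary rate parameter chosen for the example) verifies this both analytically and by Monte Carlo:

```python
import math
import random

random.seed(0)
rate = 2.0  # hypothetical rate parameter (events per unit time)

# Analytically: P(X > t) = exp(-rate * t), so
# P(X > s + t | X > s) = exp(-rate*(s+t)) / exp(-rate*s) = exp(-rate*t).
s, t = 0.5, 1.0
lhs = math.exp(-rate * (s + t)) / math.exp(-rate * s)
rhs = math.exp(-rate * t)
print(lhs, rhs)  # equal: the distribution has no memory of the elapsed time s

# Monte Carlo check using the standard-library exponential sampler.
draws = [random.expovariate(rate) for _ in range(200_000)]
survived_s = [x - s for x in draws if x > s]            # condition on X > s, reset clock
p_cond = sum(1 for x in survived_s if x > t) / len(survived_s)
p_marg = sum(1 for x in draws if x > t) / len(draws)
print(round(p_cond, 3), round(p_marg, 3))  # both near exp(-rate * t)
```

Having already waited a time s tells you nothing about how much longer you will wait, which is why the exponential is the natural model for decay and depletion processes governed only by their current state.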

Saturday, March 22, 2014

Whose confidence interval is this?




This week I was confronted by yet another facet of the nonsensical nature of the frequentist approach to statistics. Andrew Gelman's blog drew my attention to a recent peer-reviewed paper studying the extent of misunderstanding of the meaning of confidence intervals among students and researchers. What shocked me, though, was not only the findings of the study.

Confidence intervals are a relatively simple idea in statistics, used to quantify the precision of a measurement. When a measurement is subject to statistical noise, the result will not be exactly equal to the parameter under investigation. For a high-quality measurement, where the impact of the noise is relatively low, we can expect the result of the measurement to be close to the true value. We can express this expected closeness to the truth by supplying a narrow confidence interval. If the noise is more severe, the confidence interval will be wider - we will be less sure that the truth is close to the result of the measurement. Confidence intervals are also known as error bars.
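To make the frequentist meaning concrete, here is a minimal sketch in Python (known noise level, normal-theory interval, invented numbers throughout). It constructs one 95% interval, then repeats the whole procedure many times: the 95% figure describes the long-run coverage of the procedure, not the probability that any single realized interval contains the truth - which is exactly the misreading the cited study documents.

```python
import math
import random

random.seed(1)

# Hypothetical setup: 30 noisy readings of a quantity whose true value is
# 5.0, with known Gaussian noise of standard deviation 1.0.
true_value, sigma, n = 5.0, 1.0, 30
readings = [random.gauss(true_value, sigma) for _ in range(n)]

mean = sum(readings) / n
half_width = 1.96 * sigma / math.sqrt(n)   # 95% normal-theory half-width
ci = (mean - half_width, mean + half_width)
print(f"one realized 95% interval: ({ci[0]:.2f}, {ci[1]:.2f})")

# The guarantee attaches to the PROCEDURE: repeat the experiment many
# times, and about 95% of the intervals so constructed cover the truth.
trials, covered = 10_000, 0
for _ in range(trials):
    m = sum(random.gauss(true_value, sigma) for _ in range(n)) / n
    if m - half_width <= true_value <= m + half_width:
        covered += 1
print(f"long-run coverage: {covered / trials:.3f}")  # close to 0.95
```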