I chose the title Physics from the Edge because the theory of inertia I have suggested (MiHsC) assumes that local inertia is affected by the far-off Hubble-edge. My webpage is here and I've written a book on inertia and MiHsC which is also called Physics from the Edge.

Monday, 23 February 2015

Debate on the facts.

Over the weekend I added to Wikipedia a few lines on my peer-reviewed publications on MiHsC and how it can solve the galaxy rotation problem, among other things. As has happened a few times before, it was all deleted by an anonymous editor.

Instead of ranting about it, it's best to deal with things on the facts, so here is a summary of all the real problems MiHsC solves without needing any extra dimensions, huge amounts of invisible (dark) stuff, or any adjustable parameters. MiHsC does it with a simple model for inertia, with a solid physical model behind it, that defines the inertial mass (mi) of an object to be

mi = m * (1 - 2c^2/(|a|*T))

where m is the normal (unmodified) mass, c is the speed of light, |a| is the magnitude of the acceleration of the object relative to other objects, and T is the Hubble scale. All these quantities are well defined and well known, so there is no scope in MiHsC for 'tuning', such as is done freely with dark matter, and even in MoND with a0, and yet without tuning this MiHsC formula predicts the following anomalous observations, within their uncertainties:
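To get a feel for the sizes involved, the MiHsC formula can be evaluated numerically. This is a minimal sketch, assuming a round value for the Hubble scale, T ~ 2.6e26 m (an assumed figure for illustration, not quoted in this post):

```python
# Sketch: evaluate the MiHsC inertial-mass correction mi = m*(1 - 2c^2/(|a|*T)).
# T ~ 2.6e26 m is an assumed round value for the Hubble scale.

C = 3.0e8        # speed of light (m/s)
T = 2.6e26       # Hubble scale (m), assumed

def mihsc_inertial_mass(m, a):
    """Inertial mass of an object of normal mass m at acceleration magnitude a."""
    return m * (1.0 - 2.0 * C**2 / (a * T))

# At everyday accelerations the correction is negligible (a few parts in 1e11)...
frac_earth = 1.0 - mihsc_inertial_mass(1.0, 9.8)
# ...but at the tiny accelerations found at galaxy edges it is of order one.
frac_galaxy = 1.0 - mihsc_inertial_mass(1.0, 1.0e-9)
# The correction reaches 100% at a minimum acceleration a_min = 2c^2/T,
# which with these inputs is roughly 7e-10 m/s^2.
a_min = 2.0 * C**2 / T
```

This is why the modification hides from laboratory tests but shows up in deep space: the correction only becomes significant near the ~1e-10 m/s^2 accelerations discussed below.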

The mass of the cosmos (solving the flatness problem), the low-l CMB anomaly (unexpected smoothness on cosmic scales), the dynamics of galaxy clusters without dark matter (MoND cannot do this), galaxy rotation without dark matter, the observed minimum mass of dwarf galaxies, the strange (un)bound orbit of Proxima Centauri, the Pioneer anomaly (far more simply than the 'accepted' Byzantine thermal solution), the flyby anomalies, the Podkletnov and Tajmar anomalies, the EmDrive, and it explains for the first time the phenomenon of inertial mass itself (only 0.1% of which is explained by the Higgs mechanism). An introduction to MiHsC is here.

Mainstream theoretical (astro)physics, lost in Plato's realm, has lost the ability to debate issues on the facts, and has resorted to rearranging invisible entities and deleting new ideas online, but this is a symptom of its bankruptcy.

Friday, 13 February 2015

MiHsC vs EmDrive data: 3d

The EmDrive is a fascinating anomaly. It consists of a truncated metal cone (cavity) with a magnetron inside that inputs EM radiation with a wavelength similar to the size of the cavity (it's rather like a loudspeaker-shaped microwave oven). Three different groups (UK, China, US) have shown that when a resonance is achieved the cavity moves slightly towards its narrow end, in apparent violation of the conservation of momentum, since there is no expelled mass to cause this. There was a suspicion that the movement was due to air currents, but NASA have just this last week shown that the same thing happens in vacuo.

In previous blogs I showed that MiHsC predicts the EmDrive thrust reasonably well, if it is assumed that photons have inertial mass which is caused by Unruh radiation whose wavelengths must fit inside the cone. MiHsC predicts that more Unruh waves fit in at the wide end of the EmDrive, so for photons traveling along the axis they always gain mass going towards the wide end and lose it going the other way. This is equivalent to expelling mass towards the wide end, so the cavity moves towards its narrow end to conserve momentum.

The equation I derived to apply MiHsC to the EmDrive setup (and published here) was extremely simple and didn't take account of the 3-dimensional nature of the cavity. I have now worked out a short-cut way to calculate the effect in 3-d, and I've re-derived the equation I had before for the thrust (F), which was

F = PQL/c * (1/wb - 1/ws)                                   (1)

so that in 3-d it is now

F = 6PQL/c * ( 1/(L+4wb) - 1/(L+4ws) )             (2)

where P is the power input (in watts), Q is the Q-factor (the number of bounces of a typical photon inside the cavity), L is the axial cavity length, c is the speed of light, and ws and wb are the diameters of the small and big ends of the truncated cone. I have now applied Eq. 2 to the data I had before, and added the new vacuum result from NASA. I've put everything into Table 1, which shows results, row by row, for the two Shawyer (2008) tests (denoted Sa and Sb), the Cannae Drive test (Ca), the Chinese tests of Juan et al. (2012) (J1 and J2), the NASA tests of Brady et al. (2014) (Ba, Bb, Bc) and the recent NASA vacuum test (Bv).

Expt   Q        Power    Freq     wb       ws       L        Observed     MiHsC
                (W)      (GHz)    (cm)     (cm)     (cm)     (mN)         (mN)

Sa     5900     850      2.45     16       12.75    15.6     16           3.84
Sb     45000    1000     2.45     28       12.89    34.5     80-214       148
Ca     1.1e7    10.5     1.047    22       20       3.0      9            7.34
J1     32000    1000     2.45     28       12.89    34.5     214          106
J2     50000    1000     2.45     28       12.89    34.5     315          165
Ba     7320     16.9     1.933    27.94    15.88    22.86    0.09         0.23
Bb     18100    16.7     1.937    27.94    15.88    22.86    0.05         0.57
Bc     22000    2.6      1.88     27.94    15.88    22.86    0.06         0.11
Bv     6730     50       1.937    27.94    15.88    22.86    0.03         0.64
Thanks to Dr J. Rodal for pointing out some errors in the NASA cavity's dimensions.
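As a numerical check, Eq. 2 can be evaluated directly from a table row. Here is a minimal sketch applied to the Shawyer b case, with the table's centimetre values converted to metres (the function name and layout are my own, for illustration):

```python
# Sketch: evaluate the 3-d MiHsC EmDrive formula, Eq. 2:
#   F = (6*P*Q*L/c) * ( 1/(L + 4*wb) - 1/(L + 4*ws) )

C = 3.0e8  # speed of light (m/s)

def emdrive_thrust(P, Q, L, wb, ws):
    """Predicted thrust (N). P in watts; L, wb, ws in metres."""
    return (6.0 * P * Q * L / C) * (1.0 / (L + 4.0 * wb) - 1.0 / (L + 4.0 * ws))

# Shawyer b row of Table 1, converted from cm to m:
F = emdrive_thrust(P=1000.0, Q=45000.0, L=0.345, wb=0.28, ws=0.1289)
# |F| comes out at roughly 0.148 N, i.e. the ~148 mN in the MiHsC column;
# the negative sign corresponds to thrust towards the narrow end.
```

The other rows can be checked the same way by substituting their Q, P and dimensions.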

The results show a correlation between the observed thrusts (column 8) and the predictions of MiHsC (column 9), but with varying degrees of success: for the smaller NASA thrusts the ratio between prediction and observation is between 2 and 12; for the larger thrusts it is between 2 and 4. So, as usual, these results are interesting enough to record, but nothing yet to write to Nature about. Note that for J1 and J2 I've had to assume that their cavity dimensions were the same as those for Shawyer b, since their geometry was not documented. If anyone knows the dimensions, or notices any errors in the table, please do let me know! A summary of the table is shown as a log-log plot below (I had to use a naughty log-log plot to separate the tiny NASA values).

Some of the discrepancy between MiHsC and the data could be due to an observation pointed out by Bob Ludwick that I read on a website somewhere: the Chinese noted that the correct parameter to use is not the Power P, but the power within the resonant bandwidth of the EmDrive, which is harder to calculate.

Some of my previous blog entries about MiHsC and the emdrive are here and here. My paper is here.


Brady, D., et al., 2014. Anomalous thrust production from an RF test device measured on a low-thrust torsion pendulum. Conference proceedings, see Table page 18. Link

Juan, W., 2012. Net thrust measurement of propellantless microwave thrusters. Acta Physica Sinica, 61, 11. 

McCulloch, M.E., 2015. Can the EmDrive be predicted by quantised inertia? Progress in Physics, 11, 1, 78-80. Link
Shawyer, R., 2008. Microwave propulsion - progress in the emdrive programme. Link. (see section 6, page 6).

Saturday, 7 February 2015

A shape is just a shape

I do not go looking for controversy, but the interesting areas that I get attracted to (anomalies) are often those areas where scientific taboos have put up warning signs. One example of breaking a different kind of taboo is the paper I have just published.

It all began at the end of 2004 when I went to South Korea and noticed that there were swastika signs next to the country roads. The swastika means Buddhist temple over there, proving that in some cases Bertrand Russell was right and 'Sin is geographical'. While Germany bans the swastika, Koreans associate it with Buddhism and peace. Travel broadens the mind, and we should remember that in physics, a shape is just a shape. Anyway, seeing all these signs, I had the idea for a new way of generating energy from ocean waves, using a swastika-rotor. This is illustrated by the Figure:

The swastika, or British fylfot if you wish, is centred on an axle (black circle) connected to a dynamo and sits in a random wave field. In the inner square areas (e.g., A and B) there are fewer waves because of a sheltering effect, and because fewer wavelengths fit between the solid arms (to exist they need a node at the walls), so there is also a seiche effect. If we take the inner part of the southeast arm, there is no net wave impact force on it because the intensity of waves is the same on both sides. But if we look at the outer part of the arm, between areas B and C, there are waves to the east of it banging into it and pushing it to the west, and no waves to the west of it pushing it east. The net effect is a force (the arrow) pushing the arms of the swastika clockwise, generating rotation and electricity with the dynamo.
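The node-at-the-walls argument can be made concrete by counting standing-wave modes. Between two rigid walls a distance d apart, only wavelengths lambda_n = 2d/n fit. Here is a toy sketch (the wavelength band limits are assumed values, purely for illustration):

```python
import math

# Toy sketch of the seiche argument: between two walls a distance d apart,
# allowed standing waves have a node at each wall, i.e. wavelengths
# lambda_n = 2d/n. Count how many such modes fall inside an assumed,
# illustrative band of sea wavelengths [lam_min, lam_max].

def allowed_mode_count(d, lam_min, lam_max):
    """Number of integers n with lam_min <= 2d/n <= lam_max."""
    n_min = math.ceil(2.0 * d / lam_max)   # from the longest allowed wavelength
    n_max = math.floor(2.0 * d / lam_min)  # from the shortest allowed wavelength
    return max(0, n_max - n_min + 1)

# A narrow gap between the rotor arms admits fewer wave modes than the
# same wavelength band over a wide stretch of open water:
narrow = allowed_mode_count(d=0.25, lam_min=0.1, lam_max=2.0)  # 5 modes
wide = allowed_mode_count(d=4.0, lam_min=0.1, lam_max=2.0)     # 77 modes
```

Fewer allowed modes in the sheltered squares means less wave energy there, which is the asymmetry that drives the rotation.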

This can be applied to ocean waves of course, but why not also to other kinds of waves or disturbances? For example: sound waves, Brownian motion and, last but definitely not least, the zero point field? Could we generate 'free' energy from the zpf this way? The rotors would have to be nano-scale though, so I don't yet see how this process could be scaled up.

After 8 years of procrastinating, and experiments with Lego, I wrote a paper and started submitting it to journals. I submitted it to three journals, and at each I had a long wait: the paper wasn't rejected, it just never got reviewed, so I had to withdraw it and resubmit it elsewhere. It was odd (I do wonder whether politics stymied it). Anyway, I have now published it in the fourth journal I sent it to (see the reference below). The next thing I intend to do is apply for funding to test this in Plymouth University's new wave tank, and to investigate the zpf angle theoretically.

Note the connection with MiHsC, in that the arms of the shape act as a kind of horizon for the waves.


McCulloch, M.E., 2015. Energy from swastika-shaped rotors. Progress in Physics, 11, 2, 139-140. PDF

Monday, 2 February 2015

Empirical Falsification & Alpha Centauri

I strongly agree with Karl Popper's philosophy of empirical falsification: 'A theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can and should be scrutinised by decisive experiments. If the outcome of an experiment contradicts the theory, one should refrain from ad hoc manoeuvres that evade the contradiction merely by making it less falsifiable'.

It's clear that much of modern physics, for example the searches for gravitational waves and dark matter, is a complete reversal of this sensible approach. The search for gravitational waves is an attempt to 'prove' general relativity, not to disprove it. Similarly for dark matter: when general relativity was first applied to other galaxies it was found to wrongly predict their rotation, so invisible dark matter was invented, huge amounts of it, to make general relativity fit. This is clearly an 'ad hoc manoeuvre to evade the contradiction' that makes the whole system less falsifiable, since dark matter is added by hand and cannot be disproven. Since then, mind-boggling sums of money have been spent building detectors to look for dark matter, in disregard of the sensible Popperian approach, which would design decisive experiments that attack general relativity.

What would be a decisive experiment or observation? Torsion balance tests of the equivalence principle (the basis of general relativity, GR) are not decisive, because a theory now exists that explains galaxy rotation and violates equivalence, but would not show up in such experiments (MiHsC). One decisive way to attack GR would be to look at a very low acceleration system that cannot be explained by dark matter.

For example, I have discussed globular clusters, wide binary stars and the Alpha Centauri system before, and I have now completed a nice paper on the latter (to be submitted). Alpha Centauri is the closest star system to us. It is a triple system with two stars very close together and one extremely far away, in the 'low acceleration' regime. Sure enough, the far star (Proxima Centauri) is orbiting far too fast to be bound by the visible matter of the other two, and yet it is definitely bound because it has the same motion through the sky and the same chemistry as the others. This may sound oddly familiar! It is a decisive anomaly because it looks just like the galaxy rotation problem, but dark matter cannot be applied on these small scales. One 'fix' that has inevitably been suggested is to increase the mass of the two central stars by 3 sigma, a large increase relative to their mass uncertainty, so not ideal.

I've now shown that MiHsC predicts a loss of inertia for Proxima, so that it can be bent into a bound orbit with the observed fast speed. This means that MiHsC reconciles the chemical, co-moving and orbital data without the need for any 'fiddling', and dark matter and additions of normal matter can't work in this case.
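A rough back-of-envelope check shows why Proxima is a good test case. This sketch uses round illustrative numbers (a combined central mass of ~2 solar masses and a separation of ~13000 AU, plus an assumed Hubble scale), not the exact figures from the paper:

```python
# Rough sketch: is Proxima Centauri in the MiHsC low-acceleration regime?
# All input values are round, illustrative assumptions:
G = 6.674e-11          # gravitational constant (SI)
M_SUN = 1.989e30       # solar mass (kg)
AU = 1.496e11          # astronomical unit (m)
C = 3.0e8              # speed of light (m/s)
T = 2.6e26             # Hubble scale (m), assumed

M = 2.0 * M_SUN        # combined mass of Alpha Centauri A + B (assumed)
r = 13000.0 * AU       # Proxima's separation from the central pair (assumed)

a_newton = G * M / r**2   # Proxima's gravitational acceleration towards the pair
a_min = 2.0 * C**2 / T    # MiHsC minimum acceleration, ~7e-10 m/s^2

# a_newton comes out near 7e-11 m/s^2, an order of magnitude below a_min,
# so Proxima sits deep in the regime where MiHsC modifies inertia.
```

This is far below the ~1 m/s^2 regime where general relativity has been tested, which is what makes the system decisive.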


McCulloch, M.E., 2015. Testing quantised inertia on the Alpha Centauri system (to be submitted).

Wertheimer, J.G., G. Laughlin, 2006. Are Proxima and Alpha Centauri gravitationally bound? Astronomical Journal, 132, 1995-1997.

Wednesday, 21 January 2015

Cultural Inertia

I can't understand how physicists can be content with such poor explanations of nature as dark matter and dark energy. Of course, there are some historical examples where new mass was correctly inferred: one student yesterday pointed out to me that Mendeleev's periodic table predicted elements that were only seen later, which is a good example. But in the successful cases, like Mendeleev's or the discovery of Neptune, only a small amount of extra mass was needed to fill a gap in a mostly complete structure: interpolation. In the case of dark matter and dark energy, physicists are saying they understand 4% of the cosmos and are extrapolating the other 96%. The reason, I think, is cultural inertia, which has always been very strong.

One example of cultural inertia was the ancient Greek Aristotle's belief that the Moon was perfectly round. Nearly 2000 years later, in 1609, Galileo made a telescope (which had just been invented by Hans Lippershey), looked at the Moon, and saw jagged mountains on it! I can imagine his joy at this discovery, since he was curious and also a bit of a Socratic gadfly, but his contemporaries said 'No, the Moon is perfectly round, just as the great Aristotle said'. 'How so?', said Galileo. They replied 'It is surrounded by an invisible crystalline layer that is a perfect sphere'. 'Go on, pull the other leg!' said Galileo. 'No, really!' they said. I can imagine Galileo's frustration at having his observations countered by a theory so ridiculous it was not falsifiable: how can you disprove an invisible layer around the Moon? He could only resort to ridicule, and replied 'If you can imagine an invisible layer, then I say there are mountains in the invisible layer ten times bigger than the ones I can see through my telescope'.

In modern times, old theories die hard just the same: general relativity (GR) was suggested by Einstein in 1915, and even in 1940 he knew it was not the final word (see Feynman, 1985, page 80), since he had a genuine desire to know. It is true that GR has been tested successfully at the high accelerations of our inner Solar system (of order 1 m/s^2), but it has not been tested at the very low accelerations in galaxies (of order 10^-10 m/s^2) that Einstein never imagined, but that we have now seen. I wrote in a previous blog that 'no theory has ever survived an extrapolation over ten orders of magnitude', and I think this is vague but probably mostly true. Surprise, surprise: at the edges of galaxies, where accelerations are ten orders of magnitude lower, GR fails, and ten times as much mass as can be seen must be added to fix it. Guess what? This mass is invisible. In this case, though, there is a way to disprove dark matter: tiny globular clusters behave anomalously just like huge galaxies, but dark matter can't be used to fix them, because to be smooth enough to work on galactic scales it cannot also act on those small scales. See my blog here.

Alternatively one can modify physics in such a way that it doesn't mess up well-observed high acceleration behaviour, but also fits the new low acceleration data. This is what I have done with MiHsC (see a summary here) which fixes things without needing any invisible stuff, and in fact MiHsC is based on the philosophy that 'if you can't observe it in principle, then it doesn't exist', which was the same kind of Machian approach that led Einstein to relativity.

Cultural inertia is very strong and keeps the majority comfortable, but I think, every hundred years or so, even theoretical physicists are entitled to a bit of excitement.


Feynman, R.P., 1985. Surely You're Joking, Mr. Feynman! Vintage Books.

McCulloch, M.E., 2014. Physics from the Edge, World Scientific. Book

Monday, 12 January 2015

Time from interaction?

Imagine a firefly drifting in an empty universe that can only do three things: remember, detect photons and emit photons. If you follow the principle of Mach that 'if it can't be measured in principle, then it doesn't exist', as I do, then as far as the lonely firefly is concerned there is no time or space since it has no way of measuring them. It could emit a photon of light to try and explore its environment, but the photon will never come back so neither will any information. My intention here is not to make spacetime subjective but to apply the same idea to inanimate objects, and say that if time/space are fundamentally unmeasurable by a system then these abstract quantities don't exist.

Now imagine that suddenly another firefly appears and there are two things in the universe. Now firefly A can emit light and firefly B can respond to A with its own flash. Suddenly A and B have a way to measure time. They can't do this by measuring the time taken for a signal to return, because we've already assumed that time without the return of a signal doesn't exist, but if A and B have memories then they can count the number of times they receive a reply and call this time. This raises the questions: does time only exist with interactions? Does it speed up if you have more interactions? I think so, because this suggests a way to resolve the Einstein, Podolsky & Rosen (1935) (EPR) paradox.

In the EPR problem there is a particle with zero spin that splits into two particles, one going left, one right. Quantum mechanics, not to be pinned down, only says that both are spinning both up and down, but if someone measures particle A and finds it spinning up, by conservation of angular momentum we know immediately that particle B must spin down. Since quantum mechanics says there was no information on spin before the particles were measured, and Bell's inequality has allowed people to confirm this experimentally, this implies that A and B communicate apparently faster than light, in violation of special relativity.

Well, I'd like to suggest these particles are a bit like the fireflies: while they're diverging they can't interact with anything, and so, as above, time cannot exist for them, so at the time (from our external point of view) that they separated, they already knew what would happen at the later measurement time (being complex beings, we have lots of interactions going on, so we have a finer measure of time). I've been vaguely thinking this for years (inspired just after my physics degree by reading The Emperor's New Mind by Roger Penrose), but recently I've got stuck in and finally worked out a way to justify and quantify this using information theory. I am just about to submit a paper on it.

PS: The brilliant Transactional Interpretation of Quantum Mechanics of Cramer (1986) says something similar, but involves waves sending signals 'through' time rather than, as here, having time itself disappear.

PPS: The bleak but deep novel by Greg Bear 'The City at the End of Time' involves a sort of collapse of time so different events in history suddenly end up simultaneous.


Einstein, A., B. Podolsky, N. Rosen, 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev., 47, 777.

Cramer, J., 1986. The transactional interpretation of quantum mechanics. Reviews of Modern Physics, 58, 647-688.

Penrose, R., 1989. The Emperor's New Mind.

Friday, 2 January 2015

Bell's Anomaly

As you know by now, I'm always in search of anomalies, and probably the deepest anomaly in physics today was first noticed by Einstein, Podolsky and Rosen (1935) (hereafter EPR), who introduced it as a paradox. John Bell (1964) brilliantly quantified it and made it possible for Aspect et al. (1982) to test it, turning it into an anomaly which proves that physics, as we have known it, is not deep enough. It also offers possibly the best clue to progress.

Anyway, to put things into context: Einstein discovered one half of modern physics, special relativity, which maintains that information travels no faster than light speed, and he also had a hand in creating the other half, quantum mechanics, which says that any quantum system is in an indeterminate state (wavefunction) until it is measured, like Schroedinger's 'cat in a box with poison', which is neither alive nor dead until it's seen.

The original EPR paradox implied that these two halves of modern physics are incompatible. It starts by imagining a non-spinning particle splitting into two entangled particles of spin one half. To conserve angular momentum, one must be spin up and the other spin down. As they zoom away from each other they are in a combined spin-up and spin-down state, like the dead-alive cat. Imagine you let them get light years apart and then decide to measure the spin of one of them, and the particle suddenly decides to be clockwise (collapse of the wavefunction). The conservation of angular momentum then tells you suddenly that the other particle is spinning anticlockwise, whereas just before it was doing both. Einstein didn't like this, because no definite information on spin was encoded in the wavefunction, so how could the second particle know which way to spin? Does this information pass between them upon measurement of the first, by what he called 'spooky' action at a distance? He thought there must be a sort of invisible DNA inside the particles that encodes information about spin, and that quantum mechanics just doesn't know about these 'hidden variables' yet.

This was interesting as a paradox, but not testable. John Bell (1964) brilliantly made it quantitative and therefore testable. He calculated the correlation in spin between two entangled diverging particles, one measured at a place A at one angle and one measured at a place B at another angle. He calculated this in two ways. First by assuming hidden variables (that the two particles really do have definite physical properties all the time), which predicts that if the angle between the measurement directions at A and B is 45 degrees the correlation has magnitude 1/2 = 0.5. Secondly, he calculated the same correlation using quantum mechanics, assuming the particles only have definite properties when measured, and for the same angle predicted a magnitude of 1/sqrt(2) = 0.71 (these numbers depend on the kind of experiment you do, but are, crucially, different at this skew angle!).
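A toy Monte Carlo makes Bell's point concrete. The hidden-variable model below (each pair carries a hidden angle, and a detector returns the sign of a cosine) is my own illustrative choice, not Bell's exact construction, but it reproduces the linear hidden-variable correlation and falls short of the quantum value at the 45-degree skew angle:

```python
import math
import random

# Toy local hidden-variable model (illustrative, not Bell's construction):
# each particle pair carries a hidden angle lam, shared at creation. A
# detector at angle a returns sign(cos(lam - a)); the partner detector
# returns the opposite sign convention, mimicking anti-correlated spins.

def hv_correlation(theta, n=200000, seed=1):
    """Monte Carlo estimate of E[A*B] for detectors theta radians apart."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(lam) >= 0 else -1            # detector at angle 0
        B = -1 if math.cos(lam - theta) >= 0 else 1    # detector at angle theta
        total += A * B
    return total / n

E_hv = hv_correlation(math.pi / 4)   # ~ -0.5, the linear hidden-variable result
E_qm = -math.cos(math.pi / 4)        # ~ -0.71, the quantum prediction
# However the hidden angles are distributed, this kind of local model
# cannot reach the quantum correlation at this skew angle.
```

Running it shows the gap directly: the simulated local model settles near -0.5 while quantum mechanics demands about -0.71, which is exactly the difference Aspect's experiments decided.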

All that remained was to do such an experiment and it was done by Aspect et al. (1982) & others. They found by looking at many correlations between photon polarisations (another non-spin way to do the same thing) that the results were consistent with quantum mechanics and not Einstein's hidden variables.

What this means is that if one wants to maintain the idea that there is some physical reality out there that does not depend on the observer (this is called realism), as I think we must, and also maintain free will (so it's not the case that the cosmos knows everything before it happens, in which case: what's the point?), then you must admit that the particles are somehow communicating faster than light, or through time (Cramer, 1986). Therefore these well-observed quantum mechanical experiments are not consistent with special relativity, and the two halves of standard physics do not fit together. I think this anomaly is pointing the way to a complete rewrite of the fundamentals of our fragmented physics, with the finger pointing towards time.

If a kingdom be divided against itself, that kingdom cannot stand - J. Christ (Mark 3:24)


Einstein, A., B. Podolsky, N. Rosen, 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev., 47, 777.

Bell, J., 1964. On the Einstein Podolsky Rosen paradox. Physics, 1, 195.

Aspect, A., J. Dalibard, G. Roger, 1982. Experimental test of Bell's inequalities using time-varying analyzers. Phys. Rev. Letters, 49, 25, 1804.

Cramer, J., 1986. The transactional interpretation of quantum mechanics. Reviews of Modern Physics, 58, 647-688. Link