I have friends who are non-science PhDs and who absolutely believe that anything published in a science journal must be sound.
Before I started reading I thought, "alas, it's going to be pseudo-science again, right?"... and as far as I can see, yes, basically it is.
1. Logical bits are not thermodynamic bits, so a thermodynamic analysis of the brain can only be compared to a thermodynamic analysis of CPUs (if one wishes to compare at all)
2. Every phrase which begins "suppose, assume, conjecture" introduces fatal assumptions into the whole project; extremely few are defensible. The idea that a single neurone is "computing" whilst its "communication" with others is non-computational in a logical sense is clearly false. One can make a thermodynamic distinction between energy "of the neurone" and "of their 'communication'", but this distinction has no relevance to a computational-logical model of the brain as a computational system.
3. The frequent reference to thermodynamic limits of "computation" (in a logical sense) as a baseline for comparison with (arbitrary) parts of the brain is meaningless. The thermodynamic efficiency of the brain is interesting only insofar as any possible logical model of the brain seems to imply vastly more "computational resources", and vastly more thermodynamic resources, than CPUs have. Physical limits on theoretical computation pertain, if they are even themselves coherent (which is disputed), to the absolute minimal possible "piece" of reality, not even, I'd say, to any measurable phenomenon. As soon as a system has to engage in measurement, I'd say it would be millions of times less "efficient" than 'physical limits' would suggest.
One "trick" to see through the pseudoscience of computational neurobiology is simply to apply its methods to actual CPUs and computers. With the above assumptions, this paper would conclude that only one operation in a transistor is "computing" anything, the energy required to do that is "the energy of computation"... and the rest of the energy used across CPU(-RAM-etc.) was "merely communication".
As if all algorithms were logically specified as a purely parallel series of switch-flips. No: algorithms (in a computational-logic sense) have nothing to do with switch flips, and their implementation on digital computers requires many switch flips, serial and parallel, and "thermodynamic communication" between them, to perform the computation in question.
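To make the scale of the gap concrete, here is a hedged back-of-envelope comparison between the Landauer limit and what a real CPU dissipates per operation. All figures are rough published orders of magnitude, not taken from the paper, and the per-operation energy is an assumed illustrative number:

```python
import math

# Back-of-envelope: Landauer limit vs. a real CPU's per-operation energy.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 310.0                 # roughly body temperature, K (~37 C)

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit, ~3e-21 J

# A modern CPU dissipates very roughly 1e-11 J per operation
# (tens of watts / ~1e12 ops per second) -- an assumed, illustrative figure.
cpu_per_op = 1e-11

print(f"Landauer limit at 310 K: {landauer:.2e} J/bit")
print(f"Ratio CPU op / Landauer: {cpu_per_op / landauer:.1e}")
```

On these assumptions the per-operation energy sits many orders of magnitude above the thermodynamic floor, which is the point of the "apply the same analysis to CPUs" exercise above.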
Your very valid points are - in my reading - addressed in this paper, albeit in a milder and more forgiving form, in the Results section under A Baseline for Maximally Efficient Computation.
A simplistic model relates physics to neuroscience.
Especially:
That is, physics looks at each computational element only as a solitary individual, performing but a single operation. There is no consideration that each neuron participates in a large network or even that a logical gate must communicate its inference in a digital computer in a timely manner. Unlike idealized physics, Nature cannot afford to ignore the energy requirements arising from communication and time constraints that are fundamental network considerations (43) and fundamental to survival itself (especially time) (18, 19).
I wouldn't call it outright pseudo-science, as the authors are trying to impose some limited working framework with very technical meanings of "communication" and "computation" in this context. I, too, find this "physics"/"CS" excursion very basic and questionable, which kind of points to the purpose of that paper: demonstrating an exercise, and serving as a general symptom of the difficulty of overcoming the shallow waters of interdisciplinary fields/approaches.
[From my own experience in talking with biologists: only after a while did I begin to appreciate the deep complexities of biological/biochemical systems, only to forget those subtleties again after some time had passed.
I'm so used to simplifying from my physics background that it is actually quite hard to recognize important distinctions in different (more complex) fields, which can lead to vastly different mathematical models.]
If this were one paper in isolation, I'd call it an honest attempt at good science gone wrong through a lack of consultation with theoretical computer scientists and specialists in thermodynamics.
However, I think this is just *the entire field* -- which is then different. I think a whole field is pseudoscience when it systematically engages in the same games. I think likewise of much fMRI work,... all the way to basically the whole of nutritional science, etc.
If the premises of your project can be pretty quickly falsified by relevant domain experts, and you build a whole field out of it, you slip into my category of "pseudoscience".
This paper explicitly tries to prescribe to engineers (!!!) where their "focus" should be, having absolutely no warrant to do so. A trivial application of the same analysis to CPUs would provide the relevant baseline comparison the paper should have made. With this analysis, the key results of the paper would be exposed as useless.
I have little sympathy for all this now: it's a research hype-cycle craze to apply random bits of computer science to random bits of science in the most harebrained manner.
Perhaps you should address this in a counter-paper rather than an HN comment. Researchers have to justify grant money and regularly twist the facts to accomplish this; that's just how the system operates. Let the dregs drag and the Einsteins elevate. A large percentage of "research" quite rightly never sees the light of day; groundbreaking and globally impactful discoveries and inventions number only a few in a century.
I think you're missing reading in the non-hard sciences. In science, every analysis that clearly states its assumptions and models is valid. You don't have to derive everything from axioms and physical laws. You can make assumptions and simplifying models and operate within them, as long as you clearly state your assumptions, which may or may not hold -- that's fine. This is common in engineering papers.
> Logical bits are not thermodynamic bits
They are lower bounded by thermodynamic bits? (in the sense of energy for instance) The thermodynamic bits are exactly that: information (although it seems information in thermodynamic theory is still not perfectly well understood).
They aren't lower-bounded by thermo bits, because one can specify algorithms which require no thermodynamic work to implement. Logical bits and thermo bits are related by contingent facts of implementation. One has first to specify an algorithm (defined in terms of a computational model); then it's an open question as to what-and-how it'll be implemented.
It's also not at all clear that the physics terms "entropy, information, bits, etc." have anything to do with their computational "equivalents". Only by fairly strained thought experiments do we get alleged connections. Even these thought experiments only provide extremely limited translation of these terms between domains.
"Information" in a "logical" sense is a radically different thing than in a "thermodynamic" sense... for example, the former has an obvious observer-independent definition; the latter does not.
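The "zero-work algorithm" point can be made concrete with a standard distinction from reversible computing (this is a general illustration, not anything from the paper): a computation is logically reversible, and so in principle free of Landauer erasure cost, exactly when its truth table is a bijection. That is a property of the logical map alone, prior to any implementation:

```python
# Sketch: a boolean map is logically reversible iff it is injective on its inputs.
# Reversibility is a property of the *logical* function, not of any hardware.

def is_reversible(f, n_inputs):
    """f maps a tuple of n bits to a tuple of bits; reversible iff injective."""
    inputs = [tuple((i >> b) & 1 for b in range(n_inputs))
              for i in range(2 ** n_inputs)]
    outputs = [f(x) for x in inputs]
    return len(set(outputs)) == len(outputs)

cnot = lambda x: (x[0], x[0] ^ x[1])   # controlled-NOT: bijective, erases nothing
and_gate = lambda x: (x[0] & x[1],)    # AND: four input states collapse to two outputs

print(is_reversible(cnot, 2))      # True
print(is_reversible(and_gate, 2))  # False
```

Whether such a reversible map can be physically run at zero dissipation is exactly the contingent implementation question being argued over in this thread.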
The whole game of trying to bridge these notions without specifying implementation relations (etc.) is largely the new form of that transhumanism-craze: the respectable ideological space of delusional techno-utopian hopes.
That's pretty much useless, though. We know for a fact that the brain uses irreversible computational processes and that essentially all the input information is erased. So the lower bound is indeed valid. There is indeed a necessary connection between thermodynamic bits and logical bits. Indeed, since we know that we can assume irreversible computation, we know that the computation isn't dominated by those zero-work algorithms.
And indeed, if you look even minimally at the neuronal model of computation, you can obviously see that the computation is going to be irreversible (though reversible computation is possible in theory).
Now, you're right that there is a lot of wiggle-room for implementation, but the lower-bound is indeed robust. So there is clearly value to the argument.
Any correspondence between formal properties of the algorithm, namely, the logical model of the system and its physical properties *requires* (1) the algorithm; and (2) the implementation model.
Speaking about "computational processes" and "reversibility" at all, absent these, is meaningless.
What exactly, of the brain, is the "computational process"? What exactly is "irreversible"? This is really just pseudo-science, though it may not seem it.
We have no idea whatsoever what a logical model of anything to do with animal intelligence is, and hence, absolutely no idea what properties of animals (local to the brain or otherwise) are relevant to them implementing this logical model. To say any process of the brain is "computational" is either to say something useless (namely in the sense in which every process is "presumably, somehow computational, given a logical model") -- or, to say something pseudoscientific.
I would agree that animals, in modifying their environments by conceptualising them and developing skillful techniques to regulate themselves in response to them (ie., largely: intelligence), are highly thermodynamically irreversible systems.
This isn't a useful observation, given in pseudo-CS terminology, absent a correspondence between these physical facts and the presumed logical model of the computation going on.
If the whole of reality is an algorithm, it's one (via energy conservation) which requires zero energy to run. Ie., "logical bit" and "thermal bit" are radically different notions. They are connected contingently when one has an algorithm to-hand, and knows how it will be implemented.
There's nothing to be said about the logical bits of animal intelligence, ie., nothing to be said computationally, because we have no idea what they are.
No, there is no pseudo-science there, except when one takes the statements to mean more than they actually mean.
>Any correspondence between formal properties of the algorithm, namely, the logical model of the system and its physical properties requires (1) the algorithm; and (2) the implementation model.
>Speaking about "computational processes" and "reversibility" at all, absent these, is meaningless.
This is simply not true. We start from the assumption (as does all theoretical CS) that computational models are equivalent in capability.
We observe that the brain has inputs and outputs. We observe that the outputs are at least partially determined by the inputs.
From this, we can conclude rigorously that the brain does computation, and there is thus a computational process going on in the brain.
>What exactly, of the brain is the "computational process" ?
The correlation between inputs and outputs that follows a process in which information is transformed. This is readily observable.
>What exactly is "irreversable" ?
It is impossible to reconstruct the input from the output, therefore the computation is said to be irreversible, and is thus subject to various thermodynamic limits.
Therefore, we can rigorously conclude that the brain performs irreversible computation.
>We have no idea whatsoever what a logical model of anything to do with animal intelligence is, and hence, absolutely no idea what properties of animals (local to the brain or otherwise) are relevant to them implementing this logical model. To say any process of the brain is "computational" is either to say something useless (namely in the sense in which every process is "presumably, somehow computational, given a logical model") -- or, to say something pseudoscientific.
Now you're just taking what I said far above and beyond its actual meaning, and taking that interpretation to be pseudo-scientific. I said nothing about intelligence, all I'm saying is that there are computational processes going on inside the brain, and that those are irreversible. We don't need to know what algorithm is going on, nor do we need to know the precise model of computation, to be able to draw conclusions.
One of the conclusions we can draw is that the brain executes irreversible computation, and that the general algorithms implementing those computations must not be zero-work.
That is done without needing to know the details you seem to argue are necessary to draw such a conclusion.
We can also conclude more from this. We can, for example, place lower and upper bounds on the information being processed by various elements.
Now, someone could take this methodology and abuse it, or, as the article does, use it in conjunction with supplementary assumptions and go beyond absolute rigour.
>If the whole of reality is an algorithm, it's one (via energy conservation) which requires zero energy to run. Ie., "logical bit" and "thermal bit" are radically different notions. They are connected contingently when one has an algorithm to-hand, and knows how it will be implemented.
Now you're going way beyond what we can rigorously ascertain. If you consider the whole of reality to be a computer, then what are the inputs, and what are the outputs? Perhaps you consider the process of time to be an algorithm with the past as an input, in which case it is an algorithm that does require energy to run because it's performing irreversible computation, unless there is hidden state somewhere.
"computation" is only equivalent when it's calculative, ie., when the algorithm in question is merely computing some number.
The reason the LCD screen displays some output isn't because the electrical switches have some numerical state; it's because they have some electrical state.
The sense in which "computer" describes any system is trivial, for there to be any empirical content to computational language, we need an empirical model of the relevant algorithms the computer is performing.
My kettle is also a computer: water is its state, boiling is the "computational process", and its change of state is the "number being computed".
But it is only a kettle because that "calculation" which computes a number is a magnitude which is implemented by the kinetic state of the water.
The sense in which "computers are equivalent" is *empirically empty*. There is no scientific content to this; it is merely a statement of pure mathematics. To use this language, of pure discrete mathematics, as-if it is informative about empirical systems *is pseudo-science*.
One may as well say the brain is a geometrical system which is extended in Euclidean space, and we know topologically, that all such systems are geometrically equivalent.
The world science studies (unlike that of pure mathematics) is extended in space and time, and has properties (e.g., charge, mass, etc.). The number "34029348309384398" is only a frame of a video game when it names (charge, mass, extension, duration...) in a highly particular manner.
Unless you have an algorithm in mind, and a model which says how its numerical content corresponds to physical properties, you aren't saying anything empirical at all.
>"computation" is only equivalent when it's calculative, ie., when the algorithm in question is merely computing some number.
Whoever said that computation has to be with numbers? Here we are seeing the brain as calculative, because the output of the brain is some function of its input, is it not? Surely we can agree that this is an important, crucial, and interesting function of the brain, and worthwhile to study? I'm not saying that this is necessarily all that the brain does, but it's an interesting and unresolved dimension of what the brain does, perhaps even the most interesting.
>The reason the LCD screen displays some output isn't because the electrical switches have some numerical state, its because they have some electrical state.
Sure, I don't see how that's an issue. Why does computation have to be on numerical states? It can be on any kind of state at all, even continuous states. Be it water pressure, base pairs in DNA, luminosity, anything at all that can represent data. In fact, some of the earliest algorithms were operating on lines and circles, which are neither numerical nor even discrete.
>My kettle is also a computer: water is its state, boiling is the "computational process", and its change of state is the "number being computed".
Sure, you could see it that way. But computation isn't the only thing your kettle is doing - and it's not the interesting part about it either.
> The sense in which "computers are equivalent" is empirically empty. There is no scientific content to this; it is merely a statement of pure mathematics. To use this language, of pure discrete mathematics, as-if it is informative about empirical systems is pseudo-science.
It's far from purely mathematical, nor pseudo-scientific. Sure, you could define almost anything to do some computation, but that doesn't mean the computation it is doing is worthwhile, or an interesting dimension of its operation. Certainly, however, the computational dimension of the human brain - that is, how it manipulates data - is the most interesting part of it. It's clearly informative - in this case we can conclude that the brain does irreversible computation, and thus establish various bounds on how it operates.
> One may as well say the brain is a geometrical system which is extended in Euclidean space, and we know topologically, that all such systems are geometrically equivalent.
Sure, we can say that. How is this helpful in this context? Understanding the brain as doing computation is certainly helpful, and perhaps understanding it as topologically equivalent to other objects is too, but I can't really see how.
>Unless you have an algorithm in mind, and a model which says how its numerical content corresponds to physical properties, you arent saying anything empirical at all.
Again, why does an algorithm even require a numerical content? All an algorithm needs is data, and we clearly have data going in and out, which constrains the physical system that is processing that data.
A computation is just an implementation of a function `f: {0,1}^N-> {0,1}^M`... if not, what is the meaning of the word at all?
All these phrases, "manipulates data", "computation", and so on... they don't mean anything. A kettle "manipulates data".
You think you're saying something empirically significant about the brain using this language, but this language has no empirical content. It is a language of pure mathematics.
All of these claims are true of any system. What do we learn when we hear that "the brain is a computer" (or whatever else you wish to say)? We don't learn anything.
If you can provide a logical model of the algorithm the brain is performing, and a model of *how the brain implements it*, then we learn something.
Saying "the brain is a computer" is basically no different than saying "it can be described, somehow, by mathematics".
> A computation is just an implementation of a function `f: {0,1}^N-> {0,1}^M`... if not, what is the meaning of the word at all?
Sure, that's a definition. It happens to be mathematically equivalent to the calculation of any observable signal in the real world, because there exist isomorphisms between {0,1}^N and the set of functions of maximum frequency f over time t, for example.
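One standard route to the isomorphism claimed here is the Shannon sampling theorem: a signal bandlimited to f_max Hz over a finite window is determined by samples taken at a rate of at least 2·f_max, so quantizing each sample maps the window to a finite bitstring. A hedged sketch, where all frequencies, durations, and the 8-bit quantization are arbitrary choices for illustration:

```python
import math

# A bandlimited signal over a finite window maps to a finite bitstring:
# sample above the Nyquist rate, then quantize each sample.

f_max = 10.0                 # Hz, assumed bandlimit of the signal
fs = 4 * f_max               # sample comfortably above the Nyquist rate (2*f_max)
duration = 1.0               # seconds of observation

n = int(fs * duration)
samples = [math.sin(2 * math.pi * 3 * k / fs) + 0.5 * math.cos(2 * math.pi * 7 * k / fs)
           for k in range(n)]  # components at 3 Hz and 7 Hz, both below f_max

# Quantize each sample to 8 bits: the window becomes a string of n*8 bits.
lo, hi = min(samples), max(samples)
bits = "".join(f"{round((s - lo) / (hi - lo) * 255):08b}" for s in samples)

print(n, "samples ->", len(bits), "bits")
```

The quantization step is where the "isomorphism" becomes approximate, which is part of why the translation between physical signals and {0,1}^N is less innocent than it looks.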
> All these phrases "manipulates data", "computation", and so on... they dont mean anything
They are certainly meaningful.
> A kettle "manipulates data".
Yes, it does, but in a very trivial and uninteresting way. Crucially, it does more than just manipulate data, but manipulating data is one of the things a kettle can do.
> You think you're saying something empirically significant about the brain using this language, but this language has no empirical content. It is a language of pure mathematics.
We certainly can say empirically significant things about the brain, since facts about the way in which it performs computations can be inferred, and from this we can infer information about how the brain must be structured. Trivially, for example, we can infer a minimum total outgoing neuronal bandwidth by analyzing the makeup of outputs it can compute, and indeed we can do more.
> All of these claims are true of any system.
Of course, they are. But for some systems they can provide more insight than for others. The computation a kettle can do is insignificant and largely irrelevant to what interests us kettle-wise, so we don't really care. However, the way in which the brain manipulates data is much more interesting and intricate (and we can of course place lower and upper bounds on how intricate it is), and correspondingly we can learn much more.
> If you can provide a logical model of the algorithm the brain is performing, and a model of how the brain implements it, then we learn something.
I don't need to provide a model of the algorithm the brain is performing nor of how it is implemented to learn things. Thanks to various interesting results in computer science (in conjunction with results in the hard sciences), we can learn things about the brain without needing at all to know which algorithm it is implementing, and without knowing much of how it is implemented. At the extreme of knowing little about the implementation, we have physical lower bounds of physically possible implementations of any given computation, regardless of the algorithm.
CS theory allows us to infer facts about algorithms by knowing the computation.
For example, if I have a black box that inputs a list and outputs a sorted list: since I know that it is impossible to sort a list in the general case with fewer than n log n comparisons, and since it is impossible to recover the original list from the sorted list, then, knowing the Landauer limit, I can infer, for example, a minimum energy cost. I don't need to know anything about the implementation or algorithm.
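A hedged sketch of that black-box bound: sorting a random permutation of n distinct items erases log2(n!) bits (the original ordering is lost), so by the Landauer limit any physical sorter must dissipate at least kT·ln(n!) joules per sort, whatever the algorithm or implementation. The temperature is an assumption for illustration:

```python
import math

# Lower bound on the energy of black-box sorting, via Landauer:
# log2(n!) bits are erased, each costing at least k_B * T * ln(2) joules.

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed operating temperature, K

def min_sort_energy(n):
    bits_erased = math.lgamma(n + 1) / math.log(2)   # log2(n!) via log-gamma
    return bits_erased * k_B * T * math.log(2)       # equals k_B * T * ln(n!)

print(f"{min_sort_energy(10**6):.2e} J")   # a million items: still a tiny energy
```

This is the sense in which the bound is rigorous but also very weak: it constrains real sorters from far, far below what any implementation actually dissipates.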
Sure, but this is the trap of thinking of computer science as being either about computers (machines) or about science. It is mostly a kind of pure discrete mathematics, so we need to be careful.
Consider an algorithm which says:
while(true) state *= -1   // state flips forever: +1, -1, +1, ...
Now, identify the +1 state as the earth when in one-half of an orbit, and the -1 as the earth in the other half. And therefore the position of the earth as the logical bit ("the state") and its movement as the change to the logical bit.
This is "perpetual motion", but the technical name in physics for this is inertial motion, and it's common. Motion itself doesn't require work; using that motion for work requires work.
I was at a workshop once where the topic was entropy, and somehow we got into a discussion regarding Maxwell's demon... This strongly reminds me of that discussion.
The problem for the demon is that it needs to change the state of the trap according to the state of the incoming particle. And if you just hand-wave that "such a decision-making thing exists", then you have your contradiction. But we tried, for several days, to come up with _any_ implementation (including fantasy materials) that could conceivably exist _and_ produce that effect. And for each and every attempt to build one, we came up _immediately_ with diffusive parts, where energy _must_ be lost.
We concluded that, while none of us would feel confident to _rule out_ a possible existence of Maxwell's demon, we wouldn't _at all_ be surprised if it could be ruled out.
So while you are entirely correct that, with enough hand-waviness, you can build reversible computations, I have yet to see an argument where a _potential_ implementation of one is argued through to the end.
It's sufficient for my purposes just to show that "bit" in a logical model and "bit" under some idealised thermodynamic thought experiment are radically different notions.
Reality, i am sure, has many systems which are settable and measurable and changeable at some minimum energy... and which can interface with devices of interest. For any given problem, the limit case energy requirement is defined by the needs of the algorithm. If we require setting a highly complex input state, and if we require interactions with certain devices, then we've immediately ruled out a great deal.
These systems would provide you with a certain kind of "limit-case correspondence" between "logical bits" and "ideal physical bits" --- but we don't know what this system is. You don't get it from just playing around with units, nor from these kinds of thought experiments. You need to know what algorithm you're talking about, and what its requirements are.
If the algorithm is understood just to be "the whole of reality" and if we suppose that it is fundamentally just aggregates of discrete states being flipped (to me, highly unlikely).... then the energy requirements are Everything... which sum, i imagine (via energy conservation), to zero.
An enjoyable and approachable text with more detail on reversible computing and energy expenditure from the perspective of physics is the Feynman Lectures on Computation[1].
If you really want to be pedantic, even infinite inertial motion isn't possible, because a true vacuum is impossible, and there is thus necessarily drag somewhere.
Also, I've said that before, but we already know that brains operate under an irreversible computation model.
> Also, I've said that before, but we already know that brains operate under an irreversible computation model.
We don't know that brains operate under any kind of computational model at all. It's often postulated, but it's not proved. Every attempt I've seen at a proof reduces to begging the question.
Edit: To be clear I’m not saying a computational model of the brain can’t be a useful tool. Newtonian physics works quite well quite often even though reality isn’t Newtonian.
To be clear, I'm not saying that everything the brain does can be modelled by a known computation model. All I'm saying is that the interesting part of what the brain does is computation, in that it takes in data, operates on it, and returns data. It does this in an irreversible manner because you cannot determine the input from the output (nor a significant part of it).
If there is any model of how the brain works, it will be a computational model. Perhaps a new one, and perhaps a radically different one, but it's still going to fit the definition of a computational model.
Ultimately, the open question here is this: are uncomputable functions just a mathematical fancy, or do there really exist processes that can only be fully correctly described by uncomputable functions?
Personally I lean towards the latter view. I'm the first to admit that I have no proof. It's just my belief, because I find the metaphysical evidence compelling. I don't object to investigating the former possibility, but I also don't care for it just being baldly asserted.
If the answer is affirmative, that means that science will probably never be solved and we’ll just have to content ourselves with incremental improvements in our understanding. That’s observably been the case up until now. Granted even if all processes are in fact described entirely by computable functions we might still never discover what they are.
I hope it’s clear how all that relates to the concrete problem of understanding human cognition and the brain.
I'm nowhere near smart enough to even begin to conceive of a mathematical framework for taming noncomputable functions in a pragmatic way, but I earnestly hope some genius comes along who is, supposing that noncomputable functions are needed to completely describe our reality.
Well, those functions are called uncomputable because, as far as we know, there is no repeatable way of computing them, right?
So if the human brain was able to compute functions that Turing machines can't, it doesn't mean those functions are uncomputable, it just means the Church-Turing hypothesis is false.
The Church-Turing hypothesis just means that the lambda calculus and Turing machines are isomorphic. It doesn’t actually tell us anything about reality, no matter what enthusiastic misunderstandings might say.
No, it doesn't. The Church Turing hypothesis is much stronger - it states that all real-world computation can be done by lambda calculus or Turing machines.
That's why it's a hypothesis and not a theorem - it's pretty easy to prove that lambda calculus and Turing machines are equivalent if you have the mindset of a programmer.
Put in other words, the Church Turing hypothesis is that there is no computing model higher in hierarchy than the TM and lambda-calculus.
> It doesn’t actually tell us anything about reality
Well, of course it doesn't tell us anything about reality. That's because no one has managed to prove it, and I suspect no one ever will.
Ah yes. Fair enough. I confused the thesis with the hypothesis. This comes back to the begging the question I initially objected to.
Also, the notion of highest computing model is itself begging the question in the sense that the framing assumes a computational model.
So much for undergraduate philosophy. What’s your opinion? Are uncomputable functions just fanciful irrelevancies or are there processes in reality that can’t be described by computable functions? If so or if not can you propose how we might know?
I'm actually relatively confident the two are the same thing. I don't think any special name was given to the equivalency between lambda calculus and TMs.
> This comes back to the begging the question I initially objected to.
I don't think it does! Of course if someone assumes the thesis is true then they're begging the question, but I don't think that's what I am doing here, indeed :
> Also, the notion of highest computing model is itself begging the question in the sense that the framing assumes a computational model.
The definition of computation I'm using is that there exists a process somehow that can go from the input to the output. If that process can't be replicated by a TM then so be it, it just means Church-Turing is false.
Actually, the original wording that Church used was "any function that can be computed by a mathematician.." (could be computed by a TM).
So I think that it's reasonable to frame it as computation - it would be begging the question if I assumed church Turing was true.
It doesn't help that the uncomputable function definition assumes it! But in reality that a function is uncomputable doesn't actually mean there is no framework of computation that can compute it, just that a TM can't.
> So much for undergraduate philosophy. What’s your opinion? Are uncomputable functions just fanciful irrelevancies or are there processes in reality that can’t be described by computable functions? If so or if not can you propose how we might know?
I've honestly spent so much time on the question I don't even know anymore. I'm leaning towards the side that uncomputable functions can actually be computed in reality. There's an interesting paper here: https://www.sciencedirect.com/science/article/pii/S221137971... which also links to an equally interesting 2002 paper that provides theories as to how you could compute uncomputable functions in the real world (with or without black hole evaporation), but this is still speculative because our best theories in physics are still iffy at those scales, and obviously the physics is beyond my understanding here.
Anyways, I hope this answers the question of how we might know!
> The definition of computation I'm using is that there exists a process somehow that can go from the input to the output. If that process can't be replicated by a TM then so be it, it just means Church-Turing is false.
> Actually, the original wording that Church used was "any function that can be computed by a mathematician.." (could be computed by a TM).
That explains the disconnect. My working definition of computation is essentially the one Church gives there, with the understanding that he was talking about specifically computing the exact value of some function that could also be computed with a slide rule or some other effective procedure.
> So I think that it's reasonable to frame it as computation - it would be begging the question if I assumed church Turing was true.
Even assuming C-T, it's an enthymeme and not a syllogism. The unstated leg is something roughly like "the universe is a computer." Assuming that is basically assuming what is to be proved with respect to whether or not the brain is a computer or some higher category of reckoner that happens to be able to do everything a TM can, at least with enough time, pencil, and paper.
> I'm leaning towards the side that uncomputable functions can actually be computed in reality
For clarity it would probably be sensible to use a term other than computation for determining an exact value in this case. Anyhow, I believe I follow and I lean that way too.
There's a related metaphysical question too. I'm not sure how to state it exactly, but it's implied by questions like "When I throw a ball is reality computing a parabola and applying necessary modifications?" The alternative is that whatever way reality determines how that ball moves, it's not by what we'd call an effective procedure.
> Anyways, I hope this answers the question of how we might know!
> Even assuming C-T, it's an enthymeme and not a syllogism. The unstated leg is something roughly like "the universe is a computer." Assuming that is basically assuming what is to be proved with respect to whether or not the brain is a computer or some higher category of reckoner that happens to be able to do everything a TM can, at least with enough time, pencil, and paper.
I don't think I'm making that assumption. I'm making the assumption that the process by which the brain determines outputs from inputs has some degree of repeatability - which I think is more than reasonable. Whether the brain is or isn't a computer in the classical sense of the word, there is clearly computation happening that relates inputs to outputs, right?
That is to say, if I took the same person and made them live exactly the same life (that is, everything outside how they react reacts the same way), I'd expect the outcomes, as we repeat it more and more, to approach some kind of distribution after a fixed amount of time t (a distribution that spreads out rapidly as t grows). Note that this doesn't assume the presence or absence of free will; by the law of large numbers, it works either way.
From then on there is some kind of effective, repeatable procedure, so the brain does do some kind of computation. And of course that's not the only thing the brain does. But we don't need to have the universe as a computer, just the brain.
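The "outcomes approach a distribution" claim can be illustrated with a toy simulation (entirely illustrative: `noisy_process` is a made-up stand-in for "the same life, replayed with noise", not a model of a brain):

```python
import random

def noisy_process(seed_input, rng):
    """A toy stand-in for a repeatable-but-stochastic system:
    same input every time, output drawn from a fixed distribution."""
    return seed_input + (1 if rng.random() < 0.7 else 0)

def empirical_distribution(trials):
    """Replay the process `trials` times and tally outcome frequencies."""
    rng = random.Random(42)  # fixed seed so the replay is reproducible
    counts = {}
    for _ in range(trials):
        out = noisy_process(10, rng)
        counts[out] = counts.get(out, 0) + 1
    return {k: v / trials for k, v in counts.items()}
```

Running `empirical_distribution(100)` vs `empirical_distribution(100_000)` shows the frequencies settling near {11: 0.7, 10: 0.3} -- the law-of-large-numbers point: repeatability-in-distribution, not determinism, is all the argument needs.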
> There's a related metaphysical question too. I'm not sure how to state it exactly, but it's implied by questions like "When I throw a ball is reality computing a parabola and applying necessary modifications?" The alternative is that whatever way reality determines how that ball moves, it's not by what we'd call an effective procedure.
That's an interesting question. My attempt at an answer is that, if we were to understand the universe as a continuous process which can be defined by some kind of (recursive) semi-random, differential rules - but with maximum frequency - then yes, you could say that reality is an ongoing computation. Or you could take another crack at it and think that reality is just a set of probabilities and joint probabilities that merely get (partially) sampled - in which case there isn't really any computation happening; we're just observing a tiny sliver of reality as the ball travels the probability space. There are certainly other ways to see it too. I think it's a matter of perspective.
Thanks for the conversation! It is great and I've been able to develop my understanding of these concepts :)
Can you provide a non-theoretical example where an extant inertial body actually does no work? I believe this is impossible. You’ve just moved the assumptions to where they frame your perspective better than that other thing which competes with your perspective.
Energy is always conserved. Just define the computer to be the system in which energy is conserved, and there you go.
A "computer" is a formal pure-mathematics notion, it is just a certain sort of "discrete mathematical model". One can define a computational model of any physical system, and hence, find computers in which energy is conserved.
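As a toy illustration of finding a conserved quantity inside a formal model of computation (a sketch only -- the conserved count of 1-bits is an analogy, not physics): the Fredkin gate, a universal reversible gate, is a bijection on 3-bit states and preserves the number of set bits:

```python
def fredkin(c, a, b):
    """Controlled swap: if c == 1, swap a and b.
    Bijective on 3-bit states, and conserves the number of 1-bits
    (a toy 'conserved quantity' in a purely formal machine)."""
    return (c, b, a) if c else (c, a, b)

# Check reversibility and conservation over all 8 possible inputs:
states = [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]
outputs = [fredkin(*s) for s in states]
assert len(set(outputs)) == 8                                   # bijective => reversible
assert all(sum(s) == sum(o) for s, o in zip(states, outputs))   # 1-bits conserved
```

Which is exactly the point: the conservation law here lives entirely in the discrete mathematical model, with no empirical content until you say how some physical system realizes it.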
If we could formalize "observe the result" we would, and it would require exchanging energy with the computer, breaking its conservation. The only way to avoid it is to say that we were always a part of the computation, but things seem to get very odd from there.
You mention elsewhere that the whole universe might be construed as a computation in which energy is conserved. Why not take the absurd reduction for what it is though, and conclude that it means there's something wrong with calling anything-at-all a computer?
It's not an absurd reduction; it follows from the definition of a universal Turing machine. This is how completely non-empirical theoretical models of computation are: they are devices of pure mathematics.
This is exactly my point.
If you want to use the formal machinery of computer science to say something empirical, you can't just hijack terminology and speak in this pseudoscientific way, i.e., "the brain as a computational process" (etc.).
What is the empirical content of such a claim?
There might be some if you defined what algorithm the brain was computing, and what the empirical correspondence between the algorithm and the brain was (etc., etc.) -- but no one is doing this.
We speak as if "universal Turing machines" somehow had empirical content, as if there were something insightful about labelling the brain this way (vs. anything at all). There isn't.
Sure, it's merely an idea that is believed, like the Greeks believed that the night stars were heroes, or Galileo that Jupiter had satellites, or Newton that F=ma everywhere and that God had set up the solar system.
Some of these theories work, others don't. Nobody had a truly solid reason to believe them before it was clear they worked (in the case of those that did). Scientists all believe things they have no strict reason to -- when they're wrong they're wrong, and when they're right it's a scientific discovery.
Planck had no reason to postulate the energy packets when he came up with them. He described it as a move of pure desperation. The explanation (such as it is) for why this worked wasn't developed until decades later.
I don't personally believe Turing machines reveal much about the human brain, but you're verging on a more general claim about how science should be performed: that to hypothesize, work within a model, or even get something out of it, you need to already have a precise description of the phenomenon being described and "why" the model might work. None of that is required though.
To be clear this all applies much less to "bread and butter" science. What we're faced with here is a process that has no convincing description.
To convince people to give up on computational models of the brain you need to convince them that it doesn't work. Nothing has been revealed by it in 60+ years. It's never predicted anything. Neurobiology and pharmacology at least have some results to show about a real brain. What you're basically engaging in instead is philosophy -- "pure" math is distinct from the empirical world, what if the whole universe, science requires this specific method, ... But it's better if we can dismiss a scientific idea on scientific grounds, rather than philosophical considerations.
Are you trying to argue that reversible computing is classical computing, that the second law of thermodynamics doesn't exist, that modelling the brain as a classical computer/thermal process (ie. something that creates information) is presumptive, or something else? All of these concepts have been explored in depth for decades, and you don't appear to be acknowledging the existing body of work.
None of those things. My point is that logical models of computation don't, in themselves, have empirical content. The language and ideas of theoretical computer science are, as with any area of pure mathematics, only empirically useful insofar as we establish how a system corresponds to a formal model.
This is patently untrue though. Information theory exists. Entropy exists. Irreversible computation is fundamentally tied to entropy increase. It doesn't matter what system you are using: if you create/delete a bit, you create heat. No known systems are limited by this yet, but the limit is looming.
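To put a number on the entropy claim: Landauer's bound says erasing one bit dissipates at least kT·ln 2. A quick back-of-envelope in Python (the CPU figure is an assumed order of magnitude for illustration, not a measured value):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact under the 2019 SI)
T = 300.0            # roughly room temperature, K

# Minimum energy to irreversibly erase one bit:
landauer_J = k_B * T * math.log(2)
print(f"Landauer bound at 300 K: {landauer_J:.3e} J per bit")  # about 2.9e-21 J

# Rough, assumed figure for a modern CPU: ~1e-17 J per bit operation
# (order of magnitude only; varies widely with process node and workload).
cpu_J_per_bit = 1e-17
print(f"CPU is roughly {cpu_J_per_bit / landauer_J:.0f}x above the bound")
```

So real hardware sits thousands of times above the limit -- "no known systems are limited by this" -- but the bound itself is a theorem of thermodynamics plus information theory, not a property of any particular substrate.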
> With the above assumptions, this paper would conclude that only one operation in a transistor is "computing" anything
Why? A transistor is just an arbitrary boundary. You could as well draw a box around any combinatorial circuit and just call it a "computational element". It has inputs and outputs and a switching energy for each unit of computation it does.
This is just the process of breaking computation up into abstractions--"boxes".
Clearly we must do this, otherwise we'd have to reason about individual electrons.
Electrons themselves being just easily-measured boundaries around littler bits, we’d be forced to acknowledge that our entire modeled reality is constructed.
Probability suggests the same reality is there across a wide variety of experience -- or at least the experience that has been experimented on and discussed by the scientific community.
Adding an additional comment here, as I'm thinking to myself about how I'd clarify the issue further.
Suppose we write a program in `C` which requires a 32-bit array, sets an input state, operates on the array, and produces an output state. This program is the logical model, here requiring say 32 bits and 10 operations/bit on average. So we have 320 logical changes to the array (which, btw, won't be purely parallel or serial).
Suppose the most efficient CPU we have requires E_thermo energy for this whole process (setting the 32bit input, operating, reading the output).
Now, using somewhat disputed ideas about physical limits, say E_thermo implies we've used 320,000 'bits' thermodynamically, i.e., somehow the CPU has produced an equivalent of 320,000 "energetic changes".
So the CPU has an efficiency, in some suspicious sense, of 1,000 thermo-bits / logical-bit. For each single on-average change to the array we see in the `C` program, we theoretically measure an equivalent heat of 1,000 changes.
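Spelling the toy accounting out (all numbers are the illustrative ones from the example; the 320,000 figure is simply assumed, expressed here in Landauer units at room temperature):

```python
import math

# Toy accounting for the example above (all numbers illustrative).
bits = 32
ops_per_bit = 10
logical_changes = bits * ops_per_bit   # 320 logical bit-changes in the C model

# Suppose (pure assumption) the measured energy, expressed in Landauer
# units, comes out to 320,000 "thermo-bits":
thermo_bits = 320_000
k_B, T = 1.380649e-23, 300.0
E_thermo = thermo_bits * k_B * T * math.log(2)   # the implied heat, in joules

print(thermo_bits / logical_changes)   # 1000.0 thermo-bits per logical change
```

The point of the exercise is only that the ratio is well-defined for the CPU because we have both sides: a logical model (the `C` program) and a thermodynamic measurement.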
Now, how efficient is the brain at computation? Well, since we have no logical model of the computation it's performing, there's absolutely no way of answering that question.
So why do people say it's efficient? Well, because any apparently equivalent algorithm we come up with (in, e.g., `C`) to do even basic things that the brain does requires vastly more energy. E.g., processing images on a GPU requires, say, 300 W, while our whole brain uses (a claimed) 20 W.
I think the resolution to this problem is that the brain(-environment-body) system implements computation in a radically different manner than anything like a CPU, GPU, etc. I suspect that computation happens at every scale: molecular, sub-cellular, cellular, neuronal, inter-neuronal, nervous-system, body, body-environment, etc. And across the whole body, and using energy in the environment to maintain state.
However, I find the use of the term "computer" and "computational" mostly just productive of pseudoscience. It is a basically meaningless term that only produces confusion. The only useful area where it helps is when handling logical models of algorithms -- models largely absent from the whole of empirical and theoretical science.
What did you mean by "computation" then? Honestly curious, because it seems you could also say a digital computer does it at all scales: semiconductor junctions, capacitors, clocks, their multipliers, transistors, up through logic gates, blocks, buses, ..., and all the way out to the fans, power supply, etc?
I mean that if "intelligence (etc.)" in the relevant sense can be described by an algorithm, its implementation will not just depend on properties neurones have as neurones. It'll depend on properties at various scales.
This isn't quite true for digital computers, in the sense that the only property which implements the algorithm is the electrical switching state... that this state depends on, e.g., silicon isn't quite the same as the silicon properties being the ones "doing the work".
This is part of the trouble talking about "computation" at all, which is a nearly empty term in my view.
Consider an example. A "baking a cake" algorithm can be implemented by a person or a factory, if specified correctly. In one case, the organic properties of a person appear "essential to the implementation", but they aren't. But there are some algorithms which do require organic properties to implement (e.g., if we consider the behaviour of a cell wall an algorithm).
The question is in what sense are the implementation properties "doing the computational work". Any electrical system which provides switching states is "doing the work in the same way", ie., there are certain "macro-properties" which do the work, regardless of their micro-property dependencies.
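The "macro-properties doing the work regardless of their micro-property dependencies" point is essentially substrate independence, which a toy sketch can make concrete (all names here are hypothetical, for illustration only):

```python
# Toy illustration of substrate independence: the same "algorithm"
# (a NAND truth table) realized by two unrelated "substrates".

def nand_electrical(a: bool, b: bool) -> bool:
    # Stand-in for a transistor-level realization.
    return not (a and b)

def nand_marbles(a: bool, b: bool) -> bool:
    # Stand-in for a mechanical realization (say, a marble machine):
    # the gate "fires" unless both input marbles are present.
    return (int(a) + int(b)) < 2

# The macro-property (the truth table) is identical across substrates:
for a in (False, True):
    for b in (False, True):
        assert nand_electrical(a, b) == nand_marbles(a, b)
```

For switching-state systems, any substrate providing the same macro-property implements the algorithm "in the same way" -- the dispute here is whether intelligence is like that, or whether its implementation leans on properties at many scales at once.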
I think in the case of intelligence (the brain-body, etc.) the properties "doing the computational work" aren't the macro-properties of neuronal interaction. They're properties at various scales (subcellular protein behaviours; cellular interaction; neuronal communication; bodily organization; environmental driving; etc.).
In particular, I think it's the self-adapting, self-organizing properties of certain organic systems that allow them to implement "the intelligence algorithm" -- this algorithm, if it can ever be specified, I do not think will be implemented only by macroscopic neurone firing -- I'd say that'll be a very minor part of its implementation. I think most of it will come from self-organization properties, from the subcell to the body.
I find that plausible, but I am a layman. At the very least there's no evidence that the scales can be separated, because despite all attempts a brain in an organism is the only thing recognized as showing "intelligence".
What you describe seems very similar to what's happening in an ecosystem. Our understanding of ecology is maturing though while these fields seem not to be. We have made very impressive calculators of course, and I don't mean to minimize them per se -- mainly people are just reckless with their claims.
It's an interesting connection, I think, that natural selection is the only other process that is arguably doing something intelligent. This is a controversial idea and might well be wrong, but I don't believe that's proven merely by the fact that some of its key mechanisms have no intrinsic "goal".
I agree that the language is imprecise, and (especially for what I just said) has misleading connotations that are unfortunate...
I'm really curious to learn more about the processes you've alluded to and their relations, if you can recommend any reading or care to elaborate.
I’d only add that neurons are part of an exponentially higher-dimensional calculation process than binary, taking input from a huge number of chemicals, proteins, electrical signals, insulation variability, and who knows what other undiscovered dynamics at play. There is no 1:1 comparison between computers and the brain, any more than there was between aqueducts and the brain.
As if all algorithms were logically specified as a purely parallel series of switch-flips. No: algorithms, in a computational-logic sense, have nothing to do with switch flips. And their implementation on digital computers requires many switch-flips, serial and parallel, and "thermodynamic communication" between them, to perform the computation in question.