climate change
Re: climate change
FWIW, I have studied computer science, and I think rotting's AI-god is barmy. I don't think it's very good science fiction, but even if it were... it's a chimera. We don't have an AI god, and we won't have one in time to address climate change.
Re: Venting thread
zompist wrote: And while you're terribly worried about "respecting" Third Worlders

Hah!
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
zompist wrote: FWIW, I have studied computer science, and I think rotting's AI-god is barmy. I don't think it's very good science fiction, but even if it were... it's a chimera. We don't have an AI god, and we won't have one in time to address climate change.

I would urge you to distinguish among my arguments supporting the workings of the AI, the arguments for creating the AI, the arguments relevant to climate change, and so on. In particular, I never said we should create the AI to handle climate change. I don't think it would be possible to create the AI in my lifetime, or indeed for millennia. I only claim that humanity should work towards eventually creating a kind of basic redistribution system for wish fulfillment, and I don't see how that might be possible without AI.
PS. You would have to specify which aspect of the AI you think is barmy from a CS perspective. I can't tell. If you're just talking about short-term impracticality, I think I addressed that already. BTW, I have said all this before.
Last edited by rotting bones on Sun Dec 31, 2017 11:12 pm, edited 1 time in total.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: Venting thread
zompist wrote: nothing will ever be done

... towards definitively preventing irreversible climate change. I don't see how anyone could challenge this.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
Re: climate change
rotting bones wrote: PS. You would have to specify which aspect of the AI you think is barmy from a CS perspective. I can't tell.

Since you ask, well, all of it.
-- AI researchers are particularly prone to overestimating what has been done and underestimating what remains to be done.
-- Programs are almost never bug-free. Or crash-free. Programmers hate debugging and can't be trusted if they say a program has no bugs.
-- Programs do not magically acquire the ability to solve problems their programmers can't solve.*
-- Programs acting on data outside their area of competence generally do worse than humans would.
-- The more complicated a program, the harder it is to improve and debug.
-- If a marketing person claims that a program does such-and-such, that's about one degree more credible than a snake-oil salesman.
-- Even if a computer system performs well, it's subject to hacking.
-- Programmers are notoriously bad at UI... that is, at dealing with non-programmers. And yet precisely the function you wish to assign to an AI, government, is nothing but dealing with non-programmers.
-- Programmers are one of the worst choices of people to entrust world-designing to. They tend to hubris, reinventing the wheel, and ignorance of real-world problems.
* You may think this is obviously wrong-- can I really beat the world's top chess program? But of course I can, if I'm given enough time. (All I have to do is see how many moves it looks ahead, and use the same algorithm, but look one more move ahead.) The secret weapon of the computer is speed. So when you're looking at domains to throw a computer at, it will generally do well when speed is the major factor. So you absolutely want computer assistance when firing at an incoming rocket. Speed is not the major factor in government.
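The "same algorithm, one more move ahead" point can be sketched with a toy depth-limited minimax search. This is a hedged illustration only: the game tree, move names, and scores below are invented for the example, not real chess. Two players run identical code with an identical static evaluation; the only difference is search depth, and the shallow searcher falls for a trap the deeper one sees through.

```python
# Two players run the *same* depth-limited minimax with the *same*
# heuristic evaluation; the only difference is search depth.
# The game tree here is hand-made and purely illustrative.

def minimax(pos, depth, maximizing):
    """Plain depth-limited minimax; fall back to the static eval at the horizon."""
    moves = pos.get("moves", {})
    if depth == 0 or not moves:
        return pos["eval"]
    values = [minimax(child, depth - 1, not maximizing)
              for child in moves.values()]
    return max(values) if maximizing else min(values)

def best_move(pos, depth):
    """Choose the move whose resulting position minimax-scores best for the mover."""
    return max(pos["moves"],
               key=lambda m: minimax(pos["moves"][m], depth - 1, False))

leaf = lambda v: {"eval": v}

# Move "A" looks attractive to the static evaluation (+5) but walks into
# a tactic: the opponent's reply "a1" forces a lost position two plies
# later. Move "B" looks dull (+1) but is sound.
root = {"eval": 0, "moves": {
    "A": {"eval": 5, "moves": {
        "a1": {"eval": 4, "moves": {"x": leaf(-10), "y": leaf(-8)}},
        "a2": {"eval": 3, "moves": {"p": leaf(6)}},
    }},
    "B": {"eval": 1, "moves": {
        "b1": {"eval": 1, "moves": {"q": leaf(1), "r": leaf(2)}},
        "b2": {"eval": 2, "moves": {"s": leaf(1)}},
    }},
}}

print(best_move(root, 1))  # shallow search trusts the heuristic: "A"
print(best_move(root, 3))  # two plies deeper, the trap is visible: "B"
```

The same idea scales to zompist's point: given the identical algorithm and evaluation, extra depth can only reveal more of the tree, which is why raw speed is the computer's real advantage here.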
I know that singularitarians have 'answers' to some of these objections, but it's mostly handwaving that couldn't pass a serious internal design review. For instance, you can't fix things by explaining that "mere meatsacks won't write the One True AI; AIs will write it!" If your move is really "the AI is effectively an alien intelligence beyond human understanding", you simply cannot also maintain that it sticks faithfully to the Three Laws of Robotics or whatever human-friendly guidelines some earlier programmers installed in some now-obsolete code. It can't be both a god and a slave.
And believe me, I know how intoxicating computers can be. I had the AI bug myself for years, and in my own sf future, I have AIs being developed eventually. But you know, just as I wouldn't trust a teetotaler to be a good wine advisor, I wouldn't entrust government of human beings to people who hope to turn themselves into computer programs.
Re: Venting thread
zompist wrote: nothing will ever be done
rotting bones wrote: ... towards definitively preventing irreversible climate change. I don't see how anyone could challenge this.

You're changing the goalposts; your original comments were about "allowing global warming to continue".
It's likely that the 1.4C or so global warming we already have is permanent. But I assume this isn't what you were talking about-- none of your remarks make any sense if they're about changes we've already made.
I don't really know what you mean by "nothing will ever be done" when things are being done. US and European emissions are going down (despite Trump). In 2015-16, global carbon emissions actually held steady; unfortunately they're up again in 2017. But again, China and India are both taking measures to cut down on coal usage and build up solar power.
I get that collapsing into defeatist fatalism can be tempting. But when things are bad, exaggeration and apocalyptic fantasies are more annoying than insightful.
- Salmoneus
- Sanno
- Posts: 3197
- Joined: Thu Jan 15, 2004 5:00 pm
- Location: One of the dark places of the world
Re: climate change
Salmoneus wrote: Promoting dictatorship on the grounds that, after much terrible bloodshed, it might be possible to reinstate democracy one day (and presumably then reinstitute dictatorship again to solve new problems as they arise, ad infinitum) is still promoting dictatorship. Particularly when it's dictatorship by a literal superhuman.
rotting bones wrote: What? I meant create a democracy immediately after revolution with institutions balanced to handle climate change.

This just makes no practical sense. You think our failure to correctly value long-term costs can be solved by... slightly rejigging our constitutions!? In a way that only a computer could calculate?
How is this utopia implemented? By asking existing governments to reform themselves the way you prefer, instead of the way their citizens prefer? That's not going to happen. By forcing them to reform? Well, they've got the guns and the nukes and you have a big calculator, so I don't see what your practical route to reform is here.
But more so, I'm just perplexed by this weird combination of wild utopianism (there is some slightly different arrangement of democratic structures - a change in term limits, perhaps, or a reassessment of adjudication procedures for state-federal disagreements - that will magically completely solve these fundamental problems of human psychology) and fatalist pessimism (no human being could think of these reforms, and no progress at all is possible without exactly these reforms). This paradoxical and improbable set of beliefs doesn't seem to have any perceivable foundation.
I mean, I can at least understand the reasoning behind "humans can't solve these problems, so an AI will have to force us to do so". But "humans can't solve these problems, but if an AI forces us to adjust some of our democratic institutions a bit, suddenly all the problems will go away"...? I don't get it.
Here are the six fundamental problems of climate change:
- humans weight the values of immediate costs and benefits more highly than those of temporally remote costs and benefits (i.e. it's irrationally difficult to accept a cost today even for a benefit tomorrow)
- humans weight losses much higher than gains, and weight gains much higher than the deprivation of gains or the amelioration of loss (i.e. people are irrationally opposed to declines in standards of living, but also irrationally uninterested in reducing the scale of a decline in standards of living; this means that any proposal along the lines of "accept a concrete loss now in order to reduce the scale of loss tomorrow" is directly against the human grain, even allowing for the rate of future discounting)
- humans weight the value of costs and benefits that are personally close to them more highly than those of those who are remote from them (i.e. it's hard to get them to accept costs to themselves and their friends and family to produce benefits for strangers they will never meet, in far away countries about which they know little)
- humans assess probability irrationally, including by overestimating both moderately large and extremely small probabilities. For instance, in the 2016 election people overestimated the odds for both Clinton (large but not near-certain) and Stein (virtually zero) while underestimating the odds for Trump (small but significant). Similarly, people overestimate wild improbabilities that would result in climate change just solving itself, while also overestimating major threats that are not certain (that we won't be able to do anything about it), and underestimating small but significant probabilities (like, we can fix the problem). [they also overestimate the catastrophic worst-case scenarios].
- humans weight risks asymmetrically. They tend to be suboptimally (on average) risk averse. That is, it takes a lot of potential gain to outweigh a known loss, and a lot of potential loss to outweigh a known gain. Any definite policy to combat climate change is known, but its benefits are only potential.
- the logic of single-play and short-run games often encourages defection. Climate change is one such example - specifically, it's an assurance game. Nobody wants to be the one bearing the burden (particularly because one country sacrificing probably won't have any effect anyway) and everybody wants to be the free rider (because if everyone else sacrifices, you get the benefit even if you don't sacrifice). (i.e. it's extremely hard to organise different people to act for the collective good through co-ordinated action, even when everybody agrees on the desired actions)
There are, of course, other more superficial problems (like lack of awareness of the issue), but these six are fundamental. Four of them are fundamental human irrationalities, one of them is a human idiosyncrasy that results in, on average, suboptimal outcomes*, and one of them is a logical result in game theory. None of these problems will ever go away under any democratic, or indeed any human-controlled, decision-making regime. This could be an argument for a robot dictator. But it's no argument at all for a robot lawgiver who then lets humans make the actual decisions. Because the root causes of bad decisions will still remain.
*our risk weighting is on average suboptimal, but it can't be called irrational, because the value of risk is incommensurable with the value of utility, and so any attempt to assess the value of a given risk/utility portfolio requires arbitrary price setting for risk. For instance, a conservative portfolio (lower average reward but lower risk of catastrophe) cannot objectively be said to be rationally inferior to an aggressive portfolio (higher average reward but higher risk of catastrophe), or vice versa.
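The free-rider logic in the game-theory point above can be made concrete with a toy threshold public-goods game. All the numbers here are invented for illustration: cutting emissions costs each player 3, and a shared benefit of 5 per player arrives only if at least 8 of 10 players cut.

```python
# Toy model of the collective-action problem: cutting emissions costs
# the cutter `cost`; the shared `benefit` arrives for everyone only if
# at least `k` players cut. All parameter values are illustrative.

def payoff(i_cut, others_cutting, k=8, cost=3, benefit=5):
    """Payoff to one player, given their choice and how many others cut."""
    total_cutting = others_cutting + (1 if i_cut else 0)
    return (benefit if total_cutting >= k else 0) - (cost if i_cut else 0)

# 1. One country sacrificing alone pays the cost and gains nothing:
print(payoff(True, others_cutting=0))    # -3

# 2. If everyone else is already cutting, free-riding beats contributing:
print(payoff(False, others_cutting=9))   # 5 (ride free)
print(payoff(True, others_cutting=9))    # 2 (contribute anyway)

# 3. Yet a pivotal contribution still beats universal defection:
print(payoff(True, others_cutting=7))    # 2 (your cut tips the threshold)
print(payoff(False, others_cutting=0))   # 0 (nobody acts)
```

The numbers reproduce the structure Salmoneus describes: acting alone is strictly costly, free-riding dominates contributing once others act, yet co-ordinated action beats universal defection, which is exactly why the co-ordination itself is the hard part.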
Salmoneus wrote: No, we don't. We normally give our reasons, rather than our decision procedure. If someone says "why did you drop that bit of metal?" and you say "because it was really hot", that's a reason, but it's not a decision procedure.
rotting bones wrote: Yes, we do. The inference is the heat was burning your fingers, and it is known that humans instinctively avoid that.

...no, we don't. You may infer that the heat was burning my fingers, and that I avoid burning my fingers, and that I assessed that letting go of the hot object was the best way to avoid the heat, and that that's why I dropped the hot object. You may infer that. But I did not give that as a reason. I may not even know that, for instance, humans instinctively avoid burning. So that instinct may be an explanation, but it need not be my reason (it is, of course, a reason for dropping a hot object. But it is not necessarily my reason. For instance, if I'm immune to pain, I may consciously be aware that I should drop the hot object to avoid injury, but not have any instinctive impulse to do so). And even when it is my reason, it is not necessarily the reason I give. But in any case, what you suggest is not a decision procedure. At most, you could say that a reason was a combination of a given decision procedure and a given data input, although even that would be controversial. [if I have the procedure "if touching something hot, let go", that's not a reason to let go of anything unless we also have the data input 'this is something hot'. So the maxim itself is not inherently a reason unless it is categorical. Cf. Hume and Kant.]
Salmoneus wrote: No, it's not. I mean, you could have an algorithmic decision procedure, but I don't think many people would advocate that even in theory, and certainly we don't do that in practice.
rotting bones wrote: Yes, it is. That is indeed what everyone does all the time, with the proviso I explain below.

The fact that you can model something as if it followed some algorithmic procedure does not mean that it does.
Salmoneus wrote: No, they're not. This is a category error. Even if you believe decisions are made algorithmically, which they're not, reasons are the input, not the process. You can't "run" a reason - perhaps you're confusing 'a reason' with 'reasoning'?
rotting bones wrote: Yes, they are. There is no category error. Decisions are made through an analog distributed synthetic-biology-type system, but that is irrelevant. I only care about the level of representation. It is because you do not understand this that you think there is a category error where there is none. The problem is that you think I'm attacking this question as an analytic philosopher and trying to articulate what things are in ways that can be challenged only by phrasing things in a convoluted manner. But I'm attacking the problem as a computer scientist, and in our discipline, we don't care what happens beyond the level of representation. What things are is an interesting question, but one we abstract away in our solutions. This always produces category errors when taken literally, but that doesn't matter even a little bit at the level of algorithmic analysis. It is common practice among the sciences to carve out their niches of mutual irrelevance in this way. It is an interesting question how this is possible in computer science in particular, especially if you believe in philosophical materialism, but analytic philosophers don't have an answer to this question. We just know that it works by induction for some reason. We don't even know how it is possible to characterize the "space of algorithms" per se, but again that's a different question. Even speaking as an analytic philosopher, not all analytic philosophers agree with the position you have adopted to attack mine.

It "matters" because if you use words in an inconsistent way, the sentences you form are incoherent.
You may well know how to program a computer. But you are advancing theories in psychology, philosophy, and political theory, and you clearly do not know what you are talking about; being able to program a computer does not give you omniscience.
Salmoneus wrote: Quick question: are you God?
rotting bones wrote: This whole train of reasoning is irrelevant, not because I'm putting you down, but because the algorithm my AI is running is based on fulfilling people's own wishes. Therefore, I'm not imposing my particular overarching vision on others. I'm only seeking to raise the baseline of wish fulfillment. I think I can get all non-monsters to agree with this aim. So to answer your question: No, I'm not God, but I know Satan when I see him.

Well, I've never been "Satan" before, but I'll take that as a compliment. However, what you are doing here is your usual goalpost shifting - you say one thing, and then back away as soon as challenged, pretending to have said something else entirely.
You say "I'm not imposing my particular overarching vision", but you also say "I prefer democracy for a reason. What if I could have an AI which ran that reason as its algorithm?" and "how could that AI be totalitarianism? Non-totalitarianism is part of the algorithm I use to pick democracy." These things are obviously incompatible. You cannot impose rule by an AI that is programmed to follow your priorities and overarching vision, and then say you're not imposing your overarching vision.
More particularly, whether your AI is popular or not is irrelevant to the purely logical point I made. Not a political point, but a logical point, which you have ignored. Your argument is logically flawed. It is fallacious.
Let's recap: you say "How could democracy possibly be superior to an AI whose raison d'etre is the criterion by which democracy is superior to other forms of government?". Well, the answer is, "because you defined the AI as operating by the criterion that led you to think that democracy was superior, which, unless you're God, may not be the reason that democracy is actually superior." I'm sure you can understand this fallacy:
- I think democracy is good because it maximises X
- I build my AI to maximise X
- therefore my AI cannot be worse than democracy
This ONLY WORKS if you assume that "the reason I think democracy is good" and "the reason democracy is good" are identical. But, obviously, unless you are God, they are not necessarily the same. A big part of why people support democracy is precisely the recognition that what I perceive as good personally may not be what is actually good.
This is a strictly logical point about the validity of your reasoning. Complaining about me being Satan, or shifting the discussion to whether your AI 'fulfills people's wishes', does not address this glaring fallacy.
Salmoneus wrote: It should also be pointed out that there seems to be a confusion in your basic idea. If an overman, using your terminology, "ran" your reason for democracy, then the result would by definition just be "democracy". It wouldn't be "lower the voting age to 16" or "raise taxes on whiskey". An Overman that is expected to make real moral decisions must have an entire moral framework, not just "democracy is good", or even "democracy is good because X".
rotting bones wrote: This is not true because I don't support democracy just because democracy is democratic. Nor, I claim, do most democrats. For example, many people are of the opinion that democracy is the least bad form of government. This would make no sense if they wanted democracy for the sake of democracy. It follows that people want democracy for some other reason. My AI is intended to optimize for that reason.

No, obviously that does not follow. What people imply by saying "democracy is the least bad system" is that democracy has vices; that does not mean that it does not also have virtues. Most people would agree that self-government and freedom from tyranny are two of the goods that democracy seeks to maximise. People disagree, of course, as to how those goods should be weighed against other goods, like freedom from terrorist atrocities, but most people would agree that these goods - which are intrinsic, not instrumental - are goods.
rotting bones wrote: In particular, people say they want to live in democracies because that is the best way to fulfill their dreams. My AI only seeks to raise the baseline of wish fulfillment among humans.

Maybe people say that where you live. To me, that seems an extraordinarily right-wing sentiment to have. Many people support democracy even though they personally might benefit from, say, a military dictatorship. The military, for example.
But of course, the very fact that we are having this discussion shows that your views on what is valuable in a political system are not universal, so any attempt to impose them on everybody else, with or without a computer program, is oppressive.
Salmoneus wrote: By coincidence, I have a non-totalitarian invisible pink unicorn here. It really exists! I know, because existence is part of the algorithm I used to choose which invisible pink unicorn to have. So this one definitely totally exists.
rotting bones wrote: This is totally not condescending in any way! I thank you muchly for the most respectful conversation I've had in years!

Dude, you're relying on the ontological argument. Catch up to the last 600 years of reasoning and people may stop treating you like a child. Your argument was utterly ridiculous. And I demonstrated precisely why - but, as always, you're more interested in claiming offence than in actually addressing your irrational thinking. This is why most of us generally don't engage with your posts anymore.
Salmoneus wrote: Certainly any dictator who could, for example, impose energy use rationing on every family on earth would be totalitarian!
rotting bones wrote: Any democratic government that seeks to redistribute resources is totalitarian!

No, it isn't. However, any democratic government, or in this case non-democratic government, that considers itself to be fully sovereign and has the wherewithal to exercise that sovereignty - that is, a government that is not subject to constitutional limitations - is by definition totalitarian in constitution, even if it is restrained in practice.
rotting bones wrote: We should totally return to traditional religions like Catholicism and Islam that never impose forms of social organization that seek to regulate our lives in any way!

As I'm both non-religious and politically secular, I find the strawman bizarre; it demonstrates nothing other than your own continued strange obsession.
rotting bones wrote: As I keep telling you every time, the AI will not seek to regulate people who do not want to be regulated by it except in cases of dire need.

While at the same time bringing about a global revolution and reducing the amount I use my lightbulbs. This is disingenuous.
I think I'll leave this here. As always, it is clear no headway can be made, because you are not willing to settle down and seriously discuss any point in clear terms.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]
But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!
- Salmoneus
- Sanno
- Posts: 3197
- Joined: Thu Jan 15, 2004 5:00 pm
- Location: One of the dark places of the world
Re: Venting thread
Salmoneus wrote: Again, I never said that climate change would not be expensive. Just that it will not represent a serious threat to the survival of civilisation, as rotting puts it. And indeed that it will largely be business as normal.
zompist wrote: There's a huge range of outcomes between "serious threat to the survival of civilization" and "business as normal". You haven't made a good case that global warming will be "business as normal", and simply stating that it will not end civilization is saying very little at all. Lots of things could be catastrophic without ending civilization.

I'm not sure in what way you're disagreeing with me. If the price of something (in this case, everything) goes up, that's still business as normal so long as you buy the same things from the same shop. Having a bit more of one thing or a bit less of another is not a radical change, particularly when they're combined with tendencies in the opposite direction (I think it's at least reasonable to hope - and some would say reasonable to be confident - that human suffering from other political causes will continue to decline in coming decades as it has recently, so that the problems caused by climate change merely represent a slower rate of progress, on the global scale, rather than a real-terms decline).
zompist wrote: Oh jeez. Look, for reference, here is a list of things I absolutely did not say, and so I am absolutely not under any obligation to reply to your insinuations that I did:
--"future climate change [is] a unique, unprecedented problem"
--that climate change is the one and only source of refugees
--that no one should "give a shit" about "poverty, dying of disease, being flooded by cyclones, being slaughtered in a brutal war"
--that "the significance of human suffering in the third world can be weighted precisely to the specific political issues that being discussed on Western university campuses"
--anything whatsoever about "Robot-Draco"

I haven't insinuated anything about what you've said, because so far as I'm concerned you've just popped into the conversation uninvited. I mean, you're welcome to participate, of course, but don't pretend you're the centre of the universe and we're all talking about you. My issues were strictly with rottingham's view of climate change as apocalyptic - what you yourself characterise as "exaggerations and apocalyptic fantasies" - and with his suggestion of a Robodraco as a solution (to this or to similar future problems), which you yourself characterise as a "barmy" theory of an "AI-god" (robot Draco is actually a much more specific and sympathetic shorthand, I think!).

If you don't agree with these things, don't get personally offended when I disagree with them!
zompist wrote: --anything whatsoever about World War II, Colombia, Iraq, or the Irish Famine

These are called analogies; don't be disingenuous.
zompist wrote: That people are somehow too concerned about climate change is... well, an odd position... but I'd invite you to consider that human beings are particularly frustrated when a problem is not addressed, or barely so. There has been a good deal of effort in the last 70 years to address poverty, world wars, pollution, and disease-- you can certainly argue that it isn't enough, but quite a bit of progress has been made. Climate change scares people because we are not doing enough to address it, and (in the US) because half the country not only doesn't want to address it, but wants to pretend there's no problem to address.

Sure, but none of that seems relevant to me in defending these apocalyptic fantasies and barmy AI gods. At no point have I said we shouldn't worry about, or do something about, climate change. I've just said it's not apocalyptic, and not even near the top of the list of biggest problems we face.
zompist wrote: And while you're terribly worried about "respecting" Third Worlders, it's rather disrespectful to paint a picture of South Asia as always and inevitably starving. Re Bangladesh: "In 1991, well over 40% of the population lived in extreme poverty. Today, the World Bank says that less than 14% still does."

That seems to me just to further strengthen the non-apocalypse side of the argument. Things are improving. Some things will not improve as fast as we'd like, if we don't do something about climate change. But civilisation will survive.
Salmoneus wrote: that the "Muslim Ban", which of course was never implemented,
zompist wrote: You're misinformed; the Supreme Court allowed the latest version of the ban to be implemented. Plus, the administration has more tools at its disposal than the ban, and these are throttling the flow of Muslim refugees.

And we both know that the travel ban is not a Muslim ban, and that discouraging Muslim migration is not a Muslim ban, and we both disagree with Trump's policies, so let's not descend into sloganeering. Trump obviously wants to reduce the number of Muslims entering the country, but he has not even attempted to ban all Muslims from entering the country - which was his campaign promise. You know this, I know this. I'm not even sure what you're trying to argue about now.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]
But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!
-
- Avisaru
- Posts: 370
- Joined: Wed Mar 30, 2005 4:22 pm
- Location: UK
Re: climate change
zompist wrote: For instance, you can't fix things by explaining that "mere meatsacks won't write the One True AI; AIs will write it!"

Or, even simpler, the OTAI may have to write itself. Er...
- Salmoneus
- Sanno
- Posts: 3197
- Joined: Thu Jan 15, 2004 5:00 pm
- Location: One of the dark places of the world
Re: climate change
zompist wrote: -- Programs do not magically acquire the ability to solve problems their programmers can't solve.*

* You may think this is obviously wrong-- can I really beat the world's top chess program? But of course I can, if I'm given enough time. (All I have to do is see how many moves it looks ahead, and use the same algorithm, but look one more move ahead.) The secret weapon of the computer is speed. So when you're looking at domains to throw a computer at, it will generally do well when speed is the major factor. So you absolutely want computer assistance when firing at an incoming rocket. Speed is not the major factor in government.

I absolutely agree with the thrust of your concerns. In particular, I think the fatal conceptual flaw (not to discount the other practical obstacles you bring up, which are certainly going nowhere in the near future) is that any AI whose behaviour humans are able to strictly regulate in all circumstances is an AI that is not able to arrive at any startling conclusions that humans cannot arrive at, while any AI that humans CANNOT ensure control of in all circumstances is an AI that cannot be trusted to always yield decisions that abide by human morality. In particular, since human morality is the paradigm example of an unsolved problem - there is a radical lack of consensus on both big theoretical questions and small practical questions of morality - no AI can be programmed to obey human morality. It can obey the morality of one programmer, but if that programmer is fallible or controversial (and they must be, because they're human) then so is the AI. And, as you say, this original sin is perpetuated no matter how many generations of AI-designed AIs there are.

And on a practical level, if an AI is somehow so 'advanced' that a human couldn't match it, then it will also struggle to convincingly explain its theories to the public in order to gain public consent. At the moment, we live in a society in which Paul Krugman can't convince people of basic economic principles they could learn in weeks if they cared to try - I can't imagine that if Krugman told them "look, my computer software tells us this is the best thing, even though even I don't understand why, but let's just assume it's not glitching (although it's so complicated that there's no way to tell if it is!) and put all human life on the line on betting that it's right!" he would find it a particularly easy sell...
But! I just wanted to point out that your characterisation of computer chess was valid only up to last month. As I understand it, AlphaZero (the world's best chess player (and Go player, and shogi player, etc)) doesn't primarily work through "looking ahead" like a conventional chess computer (because she was originally designed for Go, in which 'looking ahead' (calculating trees of possible future moves) is an impractical approach due to the number of possible moves). Instead, she works primarily through a highly refined intuitive positional evaluation system.
Basically, when you're deciding on a move in chess, we can imagine a two-step process:
- calculating a certain number of possible moves and resulting positions, taking into account likely responses
- assigning values to the resulting positions, and picking the option that is most likely to lead to the highest-value positions.
There are thus three ways to improve decision-making:
- evaluate more positions each turn (a question of raw computing power)
- concentrate your evaluations on more relevant positions, ignoring those that are of little interest
- assign more accurate values to the positions
Other than the brute power of the first way, which is less of an issue now (most chess engines are limited more by their software than by their hardware these days), most computer chess, AIUI, has concentrated on the second way.
Alpha, however, is a revolutionary advance along the third way: she doesn't necessarily "look ahead more moves", but when she looks at a position she has a more accurate impression of how desirable it is than previous computers have had. This is also the approach humans take - we can't evaluate many positions per turn, but we're very good at evaluating each position "intuitively". Alpha is revolutionary because she mimics, and infinitely surpasses, human intuitive evaluations. Alpha, like a human, knows "what good positions look like", and steers her calculations toward them, so she doesn't have to look as many moves ahead as other computers - but she has a better sense of good positioning than humans have.
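[To make the "third way" concrete, here's a deliberately tiny sketch - my own toy, with a hand-picked feature set, nothing to do with DeepMind's actual networks - of learning a positional evaluation from win/loss outcomes instead of hand-coding it. The "game" is the take-1-to-3-stones game, where the mover wins iff the pile isn't a multiple of 4.]

```python
import math

# positions of a toy take-1-to-3-stones game: the mover wins iff n % 4 != 0
positions = list(range(1, 40))
labels = [1.0 if n % 4 != 0 else 0.0 for n in positions]

def features(n):
    # a hand-picked feature set (one-hot of n mod 4) under which the
    # win/loss rule happens to be learnable -- an illustrative choice
    f = [0.0] * 4
    f[n % 4] = 1.0
    return f

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# train a logistic evaluator by plain stochastic gradient descent
w = [0.0] * 4
for epoch in range(2000):
    for n, y in zip(positions, labels):
        x = features(n)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        w = [wi - 0.1 * (p - y) * xi for wi, xi in zip(w, x)]

def win_prob(n):
    """Learned 'positional intuition': estimated chance the mover wins."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features(n))))
```

[Once trained, the evaluator never searches at all: like a positional player, it simply "knows what good positions look like" - for this trivial game, anyway.]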
In human terms, traditional chess computers were more like Kasparov, whereas AlphaZero is more like Karpov. Or more accurately, she's like Petrosian (who was famously said to be able to smell danger 20 moves ahead), only much, much more aggressive. Notably, like Petrosian, she's much more willing than most humans to offer big sacrifices in exchange for positional improvements, and she's also willing to sacrifice what would usually be a good position for what is actually a better one in the unique circumstances at hand - humans rely too much on heuristics about piece values and areas of the board and suchlike, which Alpha often ignores in favour of her own heuristic principles (discovering what those principles are will probably be the focus of a lot of chess theory for years to come). But of course, she's also still capable of much deeper analysis than a human, so she still has the tactical brilliance of a computer, just now it's added to the strategic brilliance of a superhuman: Petrosian and Tal united in one player.
No exaggeration: as stunned as chess experts were by the early computer programmes and the way they played, they're even more stunned by what AlphaZero does. Conventional computer chess is like human chess with fewer errors and more tactical ingenuity; Alpha's chess apparently looks like it's from another world. Which is much more exciting. Grandmasters aren't that excited about conventional computer chess, because they know they can't emulate it - they can't calculate like that. Whereas AlphaZero chess may actually teach humans something, because it implies that our strategic approaches, rather than just our tactical calculations, have been flawed. It's also interesting because people had thought chess was mostly solved: the best traditional computer players overwhelmingly draw against one another, so seeing a new approach come along and thrash the old guard is kind of remarkable.
Of course, she isn't magic. Like humans and old computers, she's learned her positional theory from observation of chess games - positions, and every tiny feature of positions, that she sees lead to wins she values more highly, and those she sees lead to defeats she devalues. Apparently, she's not even all that sophisticated in how she analyses the games she plays - in terms of how many features she monitors in each position and so on. But whereas earlier computers and humans primarily analysed 'real' games, she has played innumerable games against herself and her sisters and developed her heuristics through trial and error over many more experiments than humanity has so far been able to study in the flesh.
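[Again a toy, and emphatically not AlphaZero's actual method (which couples deep networks with tree search): a sketch of estimating position values from nothing but self-play playouts of the take-1-to-3-stones game, with both sides moving at random.]

```python
import random

random.seed(1)  # fixed seed so the illustration is deterministic

def play_out(n):
    """Take-1-to-3-stones game, last stone wins; both sides move at random.
    Returns 1 if the player to move at n ends up winning, else 0."""
    turn = 0  # 0 = the player we are evaluating
    while True:
        n -= random.randint(1, min(3, n))
        if n == 0:
            return 1 if turn == 0 else 0
        turn = 1 - turn

# estimate each starting position's value purely from self-play outcomes
value = {}
for start in range(1, 13):
    games = 4000
    value[start] = sum(play_out(start) for _ in range(games)) / games
```

[Positions that theory says are lost for the mover (piles divisible by 4) come out with visibly lower estimated values than their neighbours, learned from nothing but outcome statistics - trial and error over many experiments, in miniature.]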
[the frightening fact: AlphaZero went from not knowing the rules of chess, to having learnt enough to thrash the world's best computer players (she beat Stockfish 28-0 over 100 games, winning 50% of the games where it played white; Stockfish has just been through a tournament against the other best computer players and didn't lose a single game in over 50 games), in four hours.]
[ooh, a figure: in those matches, Stockfish (which comes complete with databases of opening positions and endgames) calculated 70 million positions every second. AlphaZero, which doesn't have any of that? Only 80 thousand per second. Which means that although she was trained on supercomputers, she can run on relatively basic hardware when actually performing]
[and a smug fact: AlphaZero, despite never having played against or even been told about human chess players, nonetheless has independently reinvented the most common human chess openings as her favourites. Which at least tells us we've been doing SOMETHING right! And intriguingly, her preference for different openings varied over time, as she got better.]
Which raises an interesting question when we ask whether we humans could replicate her, given enough time. Perhaps - but since she uses her time not to analyse positions within the game, but to learn between games, it doesn't seem like quite the same thing!
But yes, of course your general point is valid.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]
But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!
Re: climate change
Point taken about AlphaZero. Neural nets can do some amazing and baffling things. However, they're susceptible to the same argument: give me the program and let me increase the number of neurons, and a huge amount of time, and I can beat it. (This is of course exactly what DeepMind is doing as it creates new versions of its game-playing programs.)
The way these algorithms think is based on human brains, and yet is baffling to human consciousness. There's no nice rules like "If you have a queen on the board, add 10 points to your score." There's nodes and weights and what they mean can be mysterious.
So, I recommend this article by Rodney Brooks to everyone to get a handle on how these things work, and what their limitations are. They are still very much tied to their human programmers and their understanding of the problem domain.
Even more relevant to this thread is his article "Seven deadly sins of predicting the future of AI".
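[To make the "nodes and weights" point concrete, here's a hand-wired toy network - my own example, not from the article - that computes XOR. Its behaviour is perfectly rule-like, but nothing in the numbers reads like "if you have a queen on the board, add 10 points".]

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# a hand-wired 2-2-1 network that computes XOR; the weights are opaque
W1 = [[20.0, 20.0], [-20.0, -20.0]]   # hidden-layer weights
b1 = [-10.0, 30.0]                    # hidden-layer biases
W2 = [20.0, 20.0]                     # output weights
b2 = -30.0                            # output bias

def net(a, b):
    h = [sigmoid(W1[i][0] * a + W1[i][1] * b + b1[i]) for i in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
```

[One hidden node happens to act like OR and the other like NAND, but a trained network's weights would offer no such tidy reading - the "what they mean can be mysterious" point.]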
- Salmoneus
- Sanno
- Posts: 3197
- Joined: Thu Jan 15, 2004 5:00 pm
- Location: One of the dark places of the world
Re: climate change
Thanks, I'll read those later. Personally, having long been on the side of "don't get carried away!" about AI, DeepMind is starting to make me reconsider that. But I don't know enough to really judge one way or another.
zompist wrote: Point taken about AlphaZero. Neural nets can do some amazing and baffling things. However, they're susceptible to the same argument: give me the program and let me increase the number of neurons, and a huge amount of time, and I can beat it. (This is of course exactly what DeepMind is doing as it creates new versions of its game-playing programs.)
Of course, the problem in moving from a chess machine to a god is the problem of trusting the machine to be correct in its moral values, not its predictions. I do think (as I think you do, iirc?) that a machine may eventually be of great use in formulating economic policy (i.e. running economic simulations), although even there I'm somewhat skeptical (not of computers, but of the complexity of economic problems).
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]
But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!
Zizekian We
@rotting bones: The planet is totally capable of housing and feeding us all, and indeed ten or twenty times as many people as we currently have, at present technological levels: people go hungry and homeless because capitalism excludes, not because we can't find enough space or grow enough food. Indeed, it's a commonly cited figure - though I'm not sure how true it rings - that 40% of all food produced is wasted. Climate change will be quite a humanitarian catastrophe, don't get me wrong, but it won't be civilization-threatening at all. We can deal with losing real estate in coastal cities: indeed, many places (like my own country of Chile) will see crop yields rise, not drop, and though warming might kill a lot of sea life and thus make fish scarcer, veggies grow quite well in warm and humid places. I'm honestly more concerned with nuclear war than with global warming as far as end-of-the-world-as-we-know-it scenarios go, even if it's a bit old-fashioned. I mean, regarding living space especially: have you traveled by plane lately? (I just got off one today, incidentally.) Let me tell you, there's a LOOOOT of room where no one lives or grows food.
I don't think we can change the economy to prevent future catastrophes: the economy consists, ultimately, of people exerting their will on the world in order for "good" (desired) outcomes to come to pass, so we won't have an economy that *doesn't* have an impact on the biosphere, the atmosphere, etcetera - and as we become more technologically advanced, that impact only goes up. I think the inevitable outcome is the total domestication of the planet, turning it if not into an ecumenopolis, at least into a big-ass garden/farm/solar collector/whatever. And all of this is being done for the first time, so it's bound to have some hiccups along the way: hell, we may even have already broken the world biosphere beyond repair, with the bees or something else entirely for that matter.
The short and sweet answer is: the same ones we have now. Climate change will not put new biomes into existence (hopefully); it'll just turn deserts into chaparrals, chaparrals into deserts, jungles into savannas, forests into swamps and so on and so forth, and we already know how to grow all the things we might want to eat in all of those environments. People on the Pacific coast of Chile, if the Humboldt Current reverses, will see more rain, and thus some of that desert will become more similar to the Argentinian pampas, with plenty of rain, and thus people will start growing cereals and fruits, and building rainwater collectors; whereas perhaps some of the corn belt in the US will become too dry, and thus the farmers there will have to either figure out how to grow corn in the new conditions, or move on to, say, olives.
what agricultural methods will we use to deal with climate change?
Then you are against all possible plans to transcend capitalism, for nothing that's never been tried is certain to produce only good results. And yes, I would very much like for you (or myself; indeed I'd buy some land in zones currently non-farmable that will, by my estimation, become farmable in future, if it weren't so damned expensive) to profit from catastrophe if that profit comes hand-in-hand with solutions to problems: I'm not at all for private profit through ownership of the means of production, but I'm even less for newly arable land to go fallow.
You want me to profit from human misery ("if you think this is likely, go buy up land in Alaska!"), and you are accusing me of being unethical? I am against any plan to transcend capitalism that is not certain of producing good results.
Re: climate change
I shan't dignify the "is sal a murderous fascist/is rotting a nazi" conversation beyond saying that neither of them is either of those things, and let's all chill.
As for the impact of the migration: it's important to keep in mind the distinction between "makes the news and changes some first worlders' lives, maybe brings neonazism into the mainstream again" and "threatens to destroy civilization". The whole Syrian crisis has, for sure, displaced a lot of people. Incidentally, a small proportion of those millions of people have found their way into the richest chunk of the world right as protracted economic crisis has eroded the standard of living of its middle and lower classes; subsequently, many people in those countries have become angry at having to take in migrants, and perhaps there have been a few terrorist attacks, the impact of which has been mainly in the news (I don't properly have the numbers, but I'd bet more people have died from diabetes than from terrorist attacks in Europe since the start of the Syrian conflict, and you can't blame terrorism on the Syrian crisis either, but rather on US foreign policy and other such things). It's not a good thing that Syria is at war and millions get displaced, but no sane and well-informed analyst would suggest this is even close to compromising the current global order, let alone civilization itself. In terms of human cost, imperialism in Africa, the World Wars, and probably even the Cold War have been more dire than global warming is going to be, though not by much.
___________
As for AI, I'm honestly not buying it, for three distinct reasons:
> one, come off it, computers are really really really really really dumb and AI is always ten years away. The whole "singularity" and "omg we're gonna program god" thing is like neolithic farmers saying that if we continue to selectively breed wheat it'll conquer the whole world and make them its slaves.
> two, computers are weird, and intelligence is not a simple, linear magnitude. Computers are already smarter than us at some things, and dumber than us at others.
> three, being smarter than everyone else isn't all it's cracked up to be. Seriously.
- Salmoneus
- Sanno
- Posts: 3197
- Joined: Thu Jan 15, 2004 5:00 pm
- Location: One of the dark places of the world
Re: climate change
I wasn't kidding about Alaska, btw. The worst-case projections that have alarming sea level rises also predict that much of Alaska, a truly vast area that is currently almost entirely unused, will be suitable for arable farming within the next century or so.
Regarding AI: I'm not so sure that computers, or at least Alpha, really are that dumb. The scary thing about Alpha isn't just that it's brilliant at several things - it's that the same algorithm has learnt to be brilliant at several things, and the thrust of the research is to make it brilliant at more things. Alpha has gone from being the best in the world at Go to being the best in the world at all known-information non-probabilistic games. Next up is Starcraft II, apparently - within a few years, Alpha will probably be the best in the world at all non-probabilistic concealed-information games. Then it's hard to imagine it won't rapidly become the best in the world at most of the probabilistic games as well. And all this may look like a bit of fun, except that Google aren't interested in Starcraft in its own right; they're interested in things like financial speculation, which is itself a well-defined game that Alpha will presumably come to dominate within, say, 10 or (best-case) 20 years (probably much less - every step so far has been quicker than anyone expected).
Where AI will find it harder is in dealing with individual humans, whose behaviour is so suboptimal and idiosyncratic that it's much harder to predict (whereas in games like chess you can mostly adopt the assumption that your opponent is at least trying to win and has some idea of how to do that). But, combined with massive surveillance data, this doesn't seem an unsolvable problem either.
Of course, AI can't do very much unless people let it. Which is why the scariest AIs are the things that governments are giving power to. The US, for instance, already uses an AI to draw up its kill lists, which humans then rubber-stamp.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]
But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!
Re: climate change
I feel like the rate should be slowed so that we can get to this paradise with minimum catastrophe and plenty of time for adaptation.
ìtsanso, God In The Mountain, may our names inspire the deepest feelings of fear in urkos and all his ilk, for we have saved another man from his lies! I welcome back to the feast hall kal, who will never gamble again! May the eleven gods bless him!
kårroť
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
zompist: But these are completely different from Sal's objections. Everyone agrees that I'm wrong. Everyone disagrees on why I'm wrong.
1. I don't understand what difference these reasons make at the level of technical work. Look, suppose I have gathered a thousand years of data on the effects that policies have on governance. I have 3rd millennium hardware on which I want to train a neural network. The network will accept the present state of a country as input and print the policy likely to benefit the least advantaged as output. I don't understand how your reasons apply to this task.
2. But speed is the quality I want to use computers for. Why else would I use computers? Without computation, it is even difficult to tell how various qualities of different countries are correlated with each other, as we've seen on the ZBB. I think it is very difficult to find the right course of action within a maze of conflicting data, and the AI would help in sorting through it all and telling us what we ought to do if we were to consistently follow our own rules.
3. I don't understand what turning yourself into a program has to do with anything. Even after we compute as much as possible, questions might still remain regarding which policies to follow. We could vote on policies or algorithms at that point, rather than voting on which oligarchs or mobsters we distrust the least.
To me, this approach seems more democratic than our current system, which the Greeks would have called an oligarchy. There is no moral prestige in running an oligarchy, only practical benefits. Why wouldn't it be more democratic to let the practical side of policy matters be handled by an AI? Rule by AI is not a dictatorship because the AI is not a person. It is more analogous to a book of laws, a scientific Quran. I think that we should base our code of laws primarily on Platonic forms rather than human traditions. These forms are: 1. Moral principles like maximin, which even Piketty advocated in his book, if you remember. 2. Computing power. 3. Algorithms of empirical induction like the ones for training neural networks.
This position might be different enough from that of the singularitarians that it were best if I stopped calling myself that.
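[Purely to make the framing in point 1 concrete: a hypothetical sketch of "state of a country in, recommended policy out". The data here is entirely invented and the method (nearest-neighbour scoring of historical outcomes) is an arbitrary stand-in, not a claim about what a real system would use; names and numbers are made up for illustration.]

```python
# invented history: (state features, policy tried, observed benefit
# to the least advantaged) -- all values are illustrative fictions
history = [
    ((0.2, 0.7), "expand-transfers", 0.8),
    ((0.3, 0.6), "expand-transfers", 0.7),
    ((0.8, 0.2), "austerity", 0.1),
    ((0.7, 0.3), "austerity", 0.2),
    ((0.75, 0.25), "expand-transfers", 0.6),
]

def dist(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recommend(state, k=2):
    """Score each candidate policy by the average observed benefit in the
    k most similar historical states where it was tried; pick the best."""
    scores = {}
    for policy in {p for _, p, _ in history}:
        rows = sorted((r for r in history if r[1] == policy),
                      key=lambda r: dist(state, r[0]))[:k]
        scores[policy] = sum(r[2] for r in rows) / len(rows)
    return max(scores, key=scores.get)
```

[Even this toy makes the open questions visible: who chooses the features, who labels "benefit", and whose history counts - which is exactly where the moral disagreements re-enter.]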
Last edited by rotting bones on Sun Jan 07, 2018 2:28 pm, edited 3 times in total.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
How are we going to get plenty of time? That link doesn't mention the transition period, which is precisely what interests me.
mèþru wrote:I feel like the rate should be slowed so that we can get to this paradise with minimum catastrophe and plenty of time for adaptation.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: Zizekian We
Right, and where are the refugees from Brazil going to go?
Torco wrote:indeed, many places (like my own country of Chile) will see crop yields rise, not drop, and though warming might kill a lot of sea life and thus make fish scarcer, veggies grow quite well in warm and humid places.
I don't see why we can't be concerned about both.
Torco wrote:I'm honestly more concerned with nuclear war than with global warming as far as end-of-the-world-as-we-know-it scenarios, even if its a bit old-fashioned.
I think you're conflating change and catastrophe. We should change the environment, but we should not change the environment in ways that are catastrophic to us, which is what we are doing now. We have never had the power to do this on a planetary scale in human history, which is why we should consider revising the way we perform economic calculations.
Torco wrote:I don't think we can change the economy to prevent future catastrophes: the economy consists, ultimately, on people exerting their will unto the world in order for "good" (desired) outcomes to come to pass, so we won't have an economy that *doesn't* have an impact on the biosphere, the atmosphere, etcetera: and as we become more technologically advanced, that impact only goes up.
I doubt it. Climate change will bring a great increase in storms and unstable weather. It will also require us to change what is grown where every decade or so in a coordinated manner. There is currently no system I know of to let us do these things. You know what Guns, Germs and Steel says?
Torco wrote:The short and sweet answer is: the same ones we have now: climate change will not put new biomes into existence (hopefully), it'll just turn deserts into chaparrals, chaparrals into deserts, jungles into savannas, forests into swamps and so on and so forth:
Our problem will be different, but similar.
Australia’s climate is highly seasonal and varies from year to year far more than that of any other continent.
Agriculture was another nonstarter in Australia, which is not only the driest continent but also the one with the most infertile soils. In addition, Australia is unique in that the overwhelming influence on climate over most of the continent is an irregular nonannual cycle, the ENSO (acronym for El Niño Southern Oscillation), rather than the regular annual cycle of the seasons so familiar in most other parts of the world. Unpredictable severe droughts last for years, punctuated by equally unpredictable torrential rains and floods. Even today, with Eurasian crops and with trucks and railroads to transport produce, food production in Australia remains a risky business. Herds build up in good years, only to be killed off by drought. Any incipient farmers in Aboriginal Australia would have faced similar cycles in their own populations. If in good years they had settled in villages, grown crops, and produced babies, those large populations would have starved and died off in drought years, when the land could support far fewer people.
Nomadism, the hunter-gatherer lifestyle, and minimal investment in shelter and possessions were sensible adaptations to Australia’s ENSO-driven resource unpredictability. When local conditions deteriorated, Aborigines simply moved to an area where conditions were temporarily better. Rather than depending on just a few crops that could fail, they minimized risk by developing an economy based on a great variety of wild foods, not all of which were likely to fail simultaneously. Instead of having fluctuating populations that periodically outran their resources and starved, they maintained smaller populations that enjoyed an abundance of food in good years and a sufficiency in bad years.
Scientific certainty, not certainty transcending radical skepticism about existence, nonexistence and so forth.Torco wrote:Then you are against all possible plans to transcend capitalism, for nothing that's never been tried is certain to produce only good results.
https://www.youtube.com/watch?v=w3RDOSNnCZwTorco wrote:and yes, I would very much like for you (or myself, indeed I'd buy some land in zones currently non-farmable that will, by my estimation, become farmable in future if it wasn't so damned expensive) to profit from catastrophe if that profit comes hand-in-hand with solutions to problems: I'm not all for private profit through ownership of the means of production, but I'm even less for newly arable land to go fallow.
Last edited by rotting bones on Sun Jan 07, 2018 2:20 pm, edited 1 time in total.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
I honestly don't understand this. I need you to explain to me the sense in which refusing to see a radical difference between the death of one person and the deaths of a thousand is not evidence of a murderous mentality. This assumption is ingrained in Sal's arguments that life for the inhabitants of the Sundarbans would not be radically worse with climate change than it was before.Torco wrote:I shan't dignify the "is sal a murderous fascist/is rotting a nazi" conversation beyond neither of them are that and lets all chill.
Do you realize that human civilization is, to a large extent, a coastal phenomenon, and that there will be constant and increasing migrations from every corner of the globe at the same time with no end in sight? Humans are also concentrated in the north, which is a problem because sea levels will rise in the north when Antarctica melts in the south and ceases to exert gravitational pull on the ocean.Torco wrote:As for the impact of the migration: its important to keep in mind the distinction between "makes the news and changes some first worlder's lives, maybe brings neonazism into the mainstream again" and "threates to destroy civilization". The whole Syrian crisis has, for sure, displaced a lot of people: incidentally, a small proportion of those millions of people have found their way into the richest chunk of the world right as protracted economic crisis has eroded the standard of living of its middle and lower classes: subsequently, many people in those countries have become angry at having to take in migrants, and perhaps there have been a few terrorist attacks the impact of which has been mainly in the news (I don't properly have the numbers, but I'd bet more people have died from diabetes than from terrorist attacks since the start of the Syrian conflicts in Europe, and you can't blame terrorism on the Syrian crisis either, but rather on US foreign policy and other such things). It's not a good thing that Syria is at war and millions get displaced, but no sane and well-informed analyst would suggest this is even close to compromising the current global order, let alone civilization itself. In terms of human cost, imperialism in Africa, the World Wars, and probably even the Cold War have been more dire than global warming is going to be, though not by much.
But the problem we're trying to solve, coordinated resource allocation, is a computational problem. What does enslavement have to do with it? Also, we are slaves to wheat. Wheat dies, we die. That is the problem. Also, we can selectively breed wheat to be our literal overlords, theoretically. We have to breed them into unicellular organisms. Breed those with animal characteristics. Breed them back up to multicellular organisms. Breed those to be superior to humans in every way. More trouble than it's worth in so many ways, but why not?Torco wrote:as for AI, I'm honestly not buying it, for three distinct reasons.
> one, come off it, computers are really really really really really dumb and AI is always ten years away. The whole "singularity" and "omg we're gonna program god" is like neolithic farmers saying that if we continue to selectively breed wheat it'll conquer the whole world and make them its slaves.
> computers are weird, and intelligence is not a simple, linear magnitude. computers are already smarter than us at some things, and dumber than us at other things
> being smarter than everyone else isn't all its cracked up to be. seriously.
Last edited by rotting bones on Sun Jan 07, 2018 3:09 pm, edited 2 times in total.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
Contra my lord and master, Yudkowsky the Eternally Radiant or Whatever, I still don't think world-threatening GAI will be possible in my lifetime. What do you think of this article? https://medium.com/@josecamachocollados ... 66ae1c84f2zompist wrote:Point taken about AlphaZero. Neural nets can do some amazing and baffling things. However, they're susceptible to the same argument: give me the program and let me increase the number of neurons, and a huge amount of time, and I can beat it. (This is of course exactly what DeepMind is doing as it creates new versions of its game-playing programs.)
The way these algorithms think is based on human brains, and yet is baffling to human consciousness. There's no nice rules like "If you have a queen on the board, add 10 points to your score." There's nodes and weights and what they mean can be mysterious.
So, I recommend this article by Rodney Brooks to everyone to get a handle on how these things work, and what their limitations are. They are still very much tied to their human programmers and their understanding of the problem domain.
Even more relevant to this thread is his article "Seven deadly sins of predicting the future of AI".
For the pro side, there's SSC on AI risk: http://slatestarcodex.com/2015/05/22/ai ... n-ai-risk/ (https://www.google.com/search?q=slatest ... irefox-b-1)
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
Sal: Your post makes no sense whatsoever as science, as philosophy or as casual conversation. In brief:
I don't think we should create an AI to handle climate change, goddammit!Salmoneus wrote:This just makes no practical sense. You think our failure to correctly value long-term costs can be solved by... slightly rejigging our constitutions!? In a way that only a computer could calculate?
Yes, we do. I'm using a theory of justification by natural experiment. Since I have Guns, Germs and Steel open:Salmoneus wrote:...no, we don't.
On this basis, my argument proposes that we eventually train a neural network to maximize minimum wish fulfillment. Incredibly, you seem to be confusing my theory of justification for a decision procedure. I didn't think humans were capable of this particular confusion.How can students of human history profit from the experience of scientists in other historical sciences? A methodology that has proved useful involves the comparative method and so-called natural experiments. While neither astronomers studying galaxy formation nor human historians can manipulate their systems in controlled laboratory experiments, they both can take advantage of natural experiments, by comparing systems differing in the presence or absence (or in the strong or weak effect) of some putative causative factor. For example, epidemiologists, forbidden to feed large amounts of salt to people experimentally, have still been able to identify effects of high salt intake by comparing groups of humans who already differ greatly in their salt intake; and cultural anthropologists, unable to provide human groups experimentally with varying resource abundances for many centuries, still study long-term effects of resource abundance on human societies by comparing recent Polynesian populations living on islands differing naturally in resource abundance.
I don't understand why you think I should care.Salmoneus wrote:The fact that you can model something as if it followed some algorithmic procedure does not mean that it does.
On the contrary, adding every possible qualification makes language more difficult to understand for most people.Salmoneus wrote:It "matters" because if you use words in an inconsistent way, the sentences you form are incoherent.
I know exactly what I'm talking about. Your reading comprehension skills are so poor that you should probably see a doctor.Salmoneus wrote:You may well know how to program a computer. But you are advancing theories in psychology, philosophy, and political theory, and you clearly do not know what you are talking about; being able to program a computer does not give you omniscience.
Nope, there is no goal post shifting. As usual, your head is in the clouds. Satan is the least advantaged dying with their wishes unfulfilled. You could be the nail on his little toe.Salmoneus wrote:Well, I've never been "Satan" before, but I'll take that as a compliment. However, what you are doing here is your usual goalpost shifting - you say one thing, and then back away as soon as challenged, pretending to have said something else entirely.
I take back what I said about studying computer science. You should study analytic philosophy after your doctor's appointment. In particular, you should come back after you have read Rawls. If you have read him once, you should go back and read him again. Every point I've made is standard in the literature.Salmoneus wrote:You say "I'm not imposing my particular overarching vision", but you also say "I prefer democracy for a reason. What if I could have an AI which ran that reason as its algorithm?" and "how could that AI be totalitarianism? Non-totalitarianism is part of the algorithm I use to pick democracy." These things are obviously incompatible. You cannot impose rule by an AI that is programmed to follow your priorities and overarching vision, and then say you're not imposing your overarching vision.
I will bet you anything that a smart eleven-year-old is capable of refuting your "logical" point. This is so basic, I don't even know how to start explaining it. Okay, suppose there is a metric M and a series of government types G1 to Gn, where each Gx scores Mx on M. Suppose Gm's score Mm is the highest of M1 to Mn. That is the argument for choosing Gm. Now add a Gn+1 such that Mn+1 > Mm. By our previous standard, we should switch from Gm to Gn+1.Salmoneus wrote:More particularly, whether your AI is popular or not is irrelevant to the purely logical point I made. Not a political point, but a logical point, which you have ignored. Your argument is logically flawed. It is fallacious.
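The Gm-versus-Gn+1 argument above is a one-line argmax, sketched here in Python with invented scores:

```python
# Choose the government type with the highest metric score, then
# re-run the choice after a new candidate appears. Scores are
# hypothetical.
scores = {"G1": 3.0, "G2": 7.0, "Gm": 9.0}  # Gm is the current best

def best_government(m):
    """Return the government type with the highest metric score."""
    return max(m, key=m.get)

assert best_government(scores) == "Gm"

scores["Gn+1"] = 9.5                        # a new type outscores Gm
assert best_government(scores) == "Gn+1"    # same standard, new choice
```

The point being made is that nothing about the selection rule changes when a new candidate is added; only the winner does.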
I feel dirty for having to say this, and I hate you for making me say it. Jesus Christ, there is more hellish torment awaiting me in this post? I give up. Go ahead and feel as superior as you like. I can't do this anymore.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates
Re: climate change
I agree with Sal that your AI-government is undemocratic. I wouldn't put things the way he did, but the idea that "the AI just implements what the people want!" is deeply dubious.rotting bones wrote:zompist: But these are completely different from Sal's objections. Everyone agrees that I'm wrong. Everyone disagrees on why I'm wrong.
If you're talking about a thousand years of data, you're writing science fiction. And that's OK, I like sf and sf discussions, but let's be clear that it is sf.1. I don't understand what difference these reasons make at the level of technical work. Look, suppose I have gathered a thousand years of data on the effects that policies have on governance. I have 3rd millennium hardware on which I want to train a neural network. The network will accept the present state of a country as input and print the policy likely to benefit the least advantaged as output. I don't understand how your reasons apply to this task.
And my answer would largely be contained in my own sf worldbuilding. To some extent, in the long run I agree with you: machines would be a powerful tool for simulating economies, and democracies will evolve past the 18th century limitation of only accepting or rejecting a single leader or coalition once every few years. In general, we use morality and untested heuristics when we don't have enough data, and eventually we'll have so much data on running a post-industrial economy that, AIs or no AIs, we'll be able to run one smoothly most of the time.
On the other hand, I think there will never be enough data on abnormal situations, so science (including AIs) can never solve all situations. Plus, in my sf future humans choose to be augmented with at least some computational power, so there is not a neat division into AIs and meatsacks. I don't agree that AIs will end up perfect and absolutely trustworthy; like humans, they will always be fallible and undeserving of absolute power. And finally, science can't replace all morality. An AI might tell you how a course of action might go; it can't tell you if people will like the result, much less whether they should choose that course of action.
It's good, and quite compatible with what Brooks is saying. Note that both emphasize that programmers had to analyze the problem quite a bit before the neural network could go to work.What do you think of this article? https://medium.com/@josecamachocollados ... 66ae1c84f2
One of Brooks's points is worth bearing in mind: we evaluate what other entities do by evaluating how smart a human would have to be to do them. Before the last century, nothing but a human could play chess, so our natural conclusion is that a chess-playing machine must be as smart as a human.
But we have to retrain our intuitions. Almost all programs are extremely stupid in most areas. AlphaZero can play go and chess. But it does not know that it is "playing a game". It would be baffled if you suddenly increased the size of the board to 10x10 -- though a human could adapt easily. It cannot discuss chess masters with you. It can't explain why it chose a particular move. If you wanted to play for money, there would be no way to propose this to AlphaZero, nor would it suddenly understand the concept of betting.
I think the fundamental disagreement here is that you want to extrapolate the power of programs indefinitely, until they're perfect. You assume that the problems with programs and programmers will somehow disappear. But those problems are likely to continue and even increase in complexity. Or be replaced by even more sophisticated and difficult problems.
(Edit:) Plus, I don't know why you assume that programs will be ever-more benign. Are programs being used benignly today? Major uses for programs today include increasing surveillance of people, targeted advertising, spambots, and military drones. And the programmers generally hold to a philosophy that they should be able to 'disrupt' any business whatsoever without following the regulations in that field.
- Salmoneus
- Sanno
- Posts: 3197
- Joined: Thu Jan 15, 2004 5:00 pm
- Location: One of the dark places of the world
Re: climate change
Because you said something that was not true. Normal people do care when they do that, certainly when they're building an argument upon it. I was giving you the benefit of the doubt.rotting bones wrote:I don't understand why you think I should care.Salmoneus wrote:The fact that you can model something as if it followed some algorithmic procedure does not mean that it does.
I have read several things by Rawls. I have read a lot of arguments for and against Rawls. I have written papers on Rawls. Your appeals to authority, I find unconvincing.In particular, you should come back after you have read Rawls. If you have read him once, you should go back and read him again.
Zompist: fwiw, while I may not agree on all the details in your SF, the broad path you suggest here I find very plausible.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]
But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!
-
- Avisaru
- Posts: 409
- Joined: Thu Sep 07, 2006 12:25 pm
Re: climate change
I do not believe that "humans can do X, which computers can't" or "the reasoning employed by humans is not algorithmic" are viable objections against AI, for the following reason:
1. I believe the human brain arrives at its conclusions through physical processes. If you don't believe that, fair enough. The rest of this argument doesn't apply to you.
2. Suppose there is a physical process X that can be encoded as a computation Y such that Turing machines cannot perform Y. If that were the case, then we could enrich the repertoire of computations in our theory of algorithms by updating our account of universal computation from "Turing machine" to "Turing machine + computation Y, as observed in process X". For example, if quantum events are allowed, then that slightly rejiggers the boundaries of some lower-order algorithmic classes in ambiguous ways.
3. Is there a process X that cannot be incorporated into computation in this way? No physical process that computes results, but there are "oracles": information about what works, arrived at only by following sequences of law-abiding events to their conclusion. The brain cannot deduce an oracle's contents any more than a computer can; oracles can only be obtained by empirical induction over processes that actually followed the relevant sequences.
What this discussion tells us is that: 1. If a human can do X, then a sufficiently powerful computer can do X by, in the worst case, reproducing algorithms corresponding to physical processes that the brain uses to reach its conclusions, and 2. Human reasoning doesn't have to be algorithmic for this to work because our theory of universal computation is an abstraction from physical theory that establishes a one-to-many mapping from all possible physical processes allowed under the laws of nature to the set of algorithms.
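Point 3 can be made concrete: an "oracle" in this sense is just a table of empirically observed results that a computation consults rather than derives. A minimal sketch in Python (the questions and table contents are invented for illustration):

```python
# A computation augmented with an empirical "oracle": results that
# were observed, not deduced. Neither a brain nor a Turing machine
# could derive these entries from first principles; both can
# consult them once the experiment has been run.
empirical_oracle = {
    "does_policy_x_work": True,        # learned by trying it, not by proof
    "does_crop_y_survive_drought": False,
}

def decide(question, oracle):
    """Answer from observation when available; otherwise refuse to guess."""
    if question in oracle:
        return oracle[question]
    return "no empirical data; run the experiment"

print(decide("does_policy_x_work", empirical_oracle))  # True
print(decide("does_policy_z_work", empirical_oracle))
```

This is why, on the argument above, "the brain uses non-algorithmic insight" reduces either to a computation (point 2) or to consulting observations that computers can consult just as well.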
As for wish fulfillment by algorithm being dubious, I would not accept an algorithm that did not judge popular moods by organizing elections. I only want to learn when and how to hold elections, and what to do with the results by empirical induction over real world events. I don't want to enforce the popular will directly in order to avoid tyranny against minorities.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain
In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates