The AI/Minority Report/Matrix thread

    #1

    Originally posted by BKR View Post
    Flagging people? Medical privacy laws will have to change, and who gets to decide who is dangerous? How?
    You want my honest opinion? AI. It's clear humans suck at manually entering/analyzing the data and drawing the conclusions required to ensure there are proper interventions. Machine learning can do it, if taught properly. Neural networks can defeat the best human players at chess; it stands to reason they can help identify bad eggs who need to be in a hospital or jail. This guy was giving off plenty of signal for years, but it was people who failed (as usual, PEBKAC). So the "system" isn't really formal or autonomous. It's best effort, YMMV. Medical privacy is a luxury that crazy, violent people cannot be afforded, given this trend.
    Last edited by W. Rabbit; 11/08/2017, 7:47pm.

    #2
    Originally posted by W. Rabbit View Post
    You want my honest opinion? AI. It's clear humans suck at manually entering/analyzing the data and drawing the conclusions required to ensure there are proper interventions. Machine learning can do it, if taught properly. Neural networks can defeat the best human players at chess; it stands to reason they can help identify bad eggs who need to be in a hospital or jail.
    Since you bring up chess, let me just say that Watson Health is still mostly smoke and mirrors. There's not a lot of "there," there. Neural nets need significant training even in the best circumstances, and IBM isn't there, yet. I have it on good authority (I'm an IBM partner who works with Watson tech, and I have a good friend who also runs an entire business built on Watson Health). Watson relies on people like me for its "go juice."
    Consider for a moment that there is no meme about brown-haired, brown-eyed step children.

    Comment


      #3
      Originally posted by W. Rabbit View Post
      You want my honest opinion? AI. It's clear humans suck at manually entering/analyzing the data and drawing the conclusions required to ensure there are proper interventions. Machine learning can do it, if taught properly. Neural networks can defeat the best human players at chess; it stands to reason they can help identify bad eggs who need to be in a hospital or jail. This guy was giving off plenty of signal for years, but it was people who failed (as usual, PEBKAC). So the "system" isn't really formal or autonomous. It's best effort, YMMV. Medical privacy is a luxury that crazy, violent people cannot be afforded, given this trend.
      No thanks...
      Falling for Judo since 1980

      "You are wrong. Why? Because you move like a pregnant yak and talk like a spazzing 'I train UFC' noob." -DCS

      "The best part of getting you worked up is your backpack full of irony and lies." -It Is Fake

      "Banning BKR is like kicking a Quokka. It's foolishness of the first order." - Raycetpfl

      Comment


        #4
        Originally posted by W. Rabbit View Post
        You want my honest opinion? AI. It's clear humans suck at manually entering/analyzing the data and drawing the conclusions required to ensure there are proper interventions. Machine learning can do it, if taught properly. Neural networks can defeat the best human players at chess; it stands to reason they can help identify bad eggs who need to be in a hospital or jail. This guy was giving off plenty of signal for years, but it was people who failed (as usual, PEBKAC). So the "system" isn't really formal or autonomous. It's best effort, YMMV. Medical privacy is a luxury that crazy, violent people cannot be afforded, given this trend.
        You mean like Minority Report? Yeaaah no

        Comment


          #5
          Originally posted by Kravbizarre View Post
          You mean like Minority Report? Yeaaah no
          Yeah, maybe. https://en.wikipedia.org/wiki/Future...ing_Technology
          Consider for a moment that there is no meme about brown-haired, brown-eyed step children.

          Comment


            #6
            Originally posted by BKR View Post
            No thanks...
            To be clear, yes, I always want your honest opinion. I just disagree with letting AI make those sorts of decisions or do those sorts of analysis, certainly not final decisions on anything.

            Such matters need to be adjudicated in front of a judge, with witnesses, the ability to cross-examine witnesses, present evidence, etc.
            Falling for Judo since 1980

            "You are wrong. Why? Because you move like a pregnant yak and talk like a spazzing 'I train UFC' noob." -DCS

            "The best part of getting you worked up is your backpack full of irony and lies." -It Is Fake

            "Banning BKR is like kicking a Quokka. It's foolishness of the first order." - Raycetpfl

            Comment


              #7
              Originally posted by submessenger View Post
              Neural nets need significant training even in the best circumstances, and IBM isn't there, yet.
              That statement is false as of October 19th, and IBM is way behind AlphaGo Zero.

              https://en.wikipedia.org/wiki/AlphaGo_Zero
              https://deepmind.com/blog/alphago-ze...rning-scratch/

              AlphaGo Zero is a version of DeepMind's Go software AlphaGo. AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version created without using data from human games, and stronger than any previous version.[1] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.[2]

              Training artificial intelligence (AI) without datasets derived from human experts has significant implications for the development of AI with superhuman skills because expert data is "often expensive, unreliable or simply unavailable."[3] Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge".[4] David Silver, one of the first authors of DeepMind's papers published in Nature on AlphaGo, said that it is possible to have generalised AI algorithms by removing the need to learn from humans.[5]
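              The self-play loop is simple enough to sketch in miniature. Here's a toy illustration (my own sketch, nothing to do with DeepMind's actual code): tabular Q-learning that learns the game of Nim purely by playing against itself, with a lookup table standing in for AlphaGo Zero's neural network.

```python
import random

random.seed(0)

# Nim: a heap of counters, players alternate taking 1-3, whoever takes the
# last counter wins. One shared value table is trained purely by self-play.
START = 10
Q = {(n, a): 0.0 for n in range(1, START + 1) for a in (1, 2, 3) if a <= n}
ALPHA, EPSILON = 0.2, 0.2  # learning rate and exploration rate

def moves(n):
    return [a for a in (1, 2, 3) if a <= n]

def best(n):
    return max(moves(n), key=lambda a: Q[(n, a)])

for episode in range(20000):
    n, history = START, []
    while n > 0:
        # Epsilon-greedy: mostly play the current best move, sometimes explore.
        a = random.choice(moves(n)) if random.random() < EPSILON else best(n)
        history.append((n, a))
        n -= a
    # The player who took the last counter won. Walking the game backwards,
    # rewards alternate +1/-1 between the winner's and loser's moves.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# Known optimal play: from a heap of n, take n % 4 counters.
print(best(2), best(3), best(5))
```

              No human games, no hand-coded strategy; correct play for the small positions falls out of self-play alone. Obviously a lookup table doesn't scale to Go, which is where the neural network and the TPUs come in.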

              Comment


                #8
                Originally posted by Kravbizarre View Post
                You mean like Minority Report? Yeaaah no
                No. That's science fiction. In MR, they rely on precognition to arrest, charge, and jail people before their crimes. The scare there is the lack of due process.

                Due process is impossible without due diligence. This is about putting names in front of investigators, so that the investigators know who to watch. There's so much data out there being used to push products at you, but it could also be used to give law enforcement wind that you are up to no good. If a lead turns out to be nothing, at least you checked it out.

                They already do this for all sorts of crime today, particularly drug crime, organized crime, and espionage. They could use it to find potential mass murder threats. Now if the latest news turns out to be true, that Kelley was using live animals for target practice... that's something an investigator already tipped off by a computer might have found evidence of, if they knew where to look.

                I think people are way too paranoid in this country, and the internet promotes that more than anything in history. The answer is not to get more paranoid by being afraid of networks that already know what toothpaste you prefer, let alone whether or not you're probably planning a massacre. It's funny how people already trust them for so much, but trusting them to hunt dangerous mass killers by sifting public records or reports, or even some private ones, is scary to them. You've seen too many movies.
                Last edited by W. Rabbit; 11/09/2017, 7:08pm.

                Comment


                  #9
                  Originally posted by W. Rabbit View Post
                  The scare there is the lack of due process.
                  The deeper scare is that the future is deterministic, with enough specificity to enable sufficiently advanced technology to exploit said knowledge. Would be great for weather forecasting, but there would no longer be a stock market. Gambling in general would go away. You think we have insurance problems, now? Just wait until AIG can determine your pre-pre-existing condition!

                  Alas, none of this has much to do with 2A, so I foresee a cull in the near future.

                  (edit) From http://www.bullshido.net/forums/show...71#post2956171
                  Consider for a moment that there is no meme about brown-haired, brown-eyed step children.

                  Comment


                    #10
                    Originally posted by submessenger View Post
                    The deeper scare is that the future is deterministic, with enough specificity to enable sufficiently advanced technology to exploit said knowledge. Would be great for weather forecasting, but there would no longer be a stock market. Gambling in general would go away. You think we have insurance problems, now? Just wait until AIG can determine your pre-pre-existing condition!

                    Alas, none of this has much to do with 2A, so I foresee a cull in the near future.
                    Even given the 2A/mental illness debate going on, the legal changes between the Obama and Trump administrations, and the clear association between mental health and gun massacres?

                    The Gun Control Megathread, the 26 dead in Texas Thread, and this one are all intimately connected. If you deal with the core problem ("chaotic" massacres that leave a trail before they occur), you'd take away a lot of the political will behind gun control proponents, whose strongest argument is these events.

                    See, that's the problem with gun control advocates AND their counterparts. It's so black and white, with no granularity, and when there is granularity, people bitch about it as discrimination. So instead of fixing complex problems with complex solutions, everyone keeps arguing over the law, and gets nowhere.

                    Trump's relaxing those rules, remember? People don't want their guns on a register because 2A, etc., and they don't want computers snooping on them (even though they'll let just about any internet company do it). Google, Amazon, etc. take your info and sift it all day long, but maybe it's time the government started taking an interest in using technology to flag bad apples before they strike, which of course might involve taking away their 2A rights, or even more.

                    I think discussion of identifying actual/probable threats, as opposed to arguing all day long about banning guns or not, is the exact problem (not) being discussed in the Megathread. That's why that thread will go on and on, because that debate is pointless until people start talking about mental health as the core issue, with the # of available guns a correlated variable.
                    Last edited by W. Rabbit; 11/09/2017, 8:04pm.

                    Comment


                      #11
                      This discussion has much broader implications than just 2A, as I'm sure you'll agree. When/if we can directly relate AI applications to the gun control debate, I'm sure Cassius will welcome them. In the interim, we can continue the discussion, here. Until then, it's just white noise to them, geeks talking about computers. Doesn't stop us talking, but it may serve to elevate the level of discussion.

                      Now, if you'll excuse me for a bit, I'm having an argument with Watson, right now. Quite literally, the fucker is choosing to forget several facts when I'm telling it to add one new fact to the mix. Annoying as all hell.
                      Consider for a moment that there is no meme about brown-haired, brown-eyed step children.

                      Comment


                        #12
                        So, here's how you fix it. You set up some temporary memory space. You record the facts into that temporary memory. You tell Big Blue your new fact. And, then you remind it of the other facts from temporary memory. WTF?
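                        In sketch form it looks like this (generic Python, not Watson's actual API; the ForgetfulEngine below is a made-up stand-in for a backend that drops old facts whenever it's told a new one):

```python
class ForgetfulEngine:
    """Made-up stand-in for a backend that forgets prior facts on update."""

    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact):
        # The buggy behavior being worked around: asserting a new fact
        # silently drops everything the engine knew before.
        self.facts = {fact}

    def remind(self, fact):
        # Re-asserting a known fact just adds it back.
        self.facts.add(fact)


class FactCache:
    """Wrapper: record facts in temporary memory, replay them after updates."""

    def __init__(self, engine):
        self.engine = engine
        self.memory = []  # our own temporary memory space

    def add_fact(self, fact):
        self.memory.append(fact)        # record into temporary memory
        self.engine.assert_fact(fact)   # tell the engine the new fact
        for known in self.memory[:-1]:  # then remind it of everything else
            self.engine.remind(known)


cache = FactCache(ForgetfulEngine())
for fact in ["sky is blue", "grass is green", "water is wet"]:
    cache.add_fact(fact)

print(sorted(cache.engine.facts))  # all three facts survive the updates
```

                        Crude (every new fact costs a full replay of the cache), but nothing gets lost.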

                        Back in a few to continue the discussion.
                        Consider for a moment that there is no meme about brown-haired, brown-eyed step children.

                        Comment


                          #13
                          Originally posted by submessenger View Post
                          This discussion has much broader implications than just 2A, as I'm sure you'll agree. When/if we can directly relate AI applications to the gun control debate, I'm sure Cassius will welcome them. In the interim, we can continue the discussion, here. Until then, it's just white noise to them, geeks talking about computers. Doesn't stop us talking, but it may serve to elevate the level of discussion.

                          Now, if you'll excuse me for a bit, I'm having an argument with Watson, right now. Quite literally, the fucker is choosing to forget several facts when I'm telling it to add one new fact to the mix. Annoying as all hell.
                          Facts can change. They aren't immutable. Like your "fact" about neural networks needing to be trained by humans, which turned out to be false. It flipped over time. We could graph it as a step function.



                          In fact, this ties right back to my point about stochastic signal theory and "random" mass shootings. Random signal theory is about the chaos of events and/or determinism in the time domain.

                          However, I'm willing to posit that there's always a signal in the noise before these events, that they aren't completely non-deterministic, and that humans are just terrible at finding the signal in what they perceive to be "noise" but is, in fact, filterable. There is advanced discrete SIG/SIGINT math that describes this exact sort of thing (finding a tiny, tiny signal in a vast ocean of random signals).
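                          A minimal sketch of the idea (a plain matched filter with coherent averaging, i.e. textbook DSP, nothing classified): correlate the noisy record against a known template, and the correlation peak lands where the hidden signal sits, even when any single record buries it below the noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)

# A short known "signature" pulse, hidden at low amplitude in long noisy records.
template = np.sin(np.linspace(0, 4 * np.pi, 128))
n, offset = 4096, 1500

# Simulate 20 independent noisy observations of the same event; in each one
# the pulse sits well below the noise floor (amplitude 0.5 vs. noise sigma 1).
records = rng.normal(0.0, 1.0, (20, n))
records[:, offset:offset + len(template)] += 0.5 * template

# Coherent averaging knocks the noise down by sqrt(20), then the matched
# filter (cross-correlation with the known template) peaks at the pulse.
stacked = records.mean(axis=0)
correlation = np.correlate(stacked, template, mode="valid")
detected = int(np.argmax(correlation))

print(detected)  # within a sample or two of the true offset, 1500
```

                          Swap "pulse in Gaussian noise" for "behavioral tells in public records" and the math gets far uglier, but the principle (a known signature, correlated against a sea of noise) is the same.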
                          Last edited by W. Rabbit; 11/09/2017, 10:21pm.

                          Comment


                            #14
                            It's also very clear that the rate of attacks is increasing, unless you believe the mass-shooting blip of 1999-2017 will end up being an anomaly. It's more likely just the start.

                            Laws and systems like NICS don't seem to solve the problem well enough, either. So why can't machines do a better job? Humans make mistakes, and we can't afford mistakes here.

                            Comment


                              #15
                              Originally posted by W. Rabbit View Post
                              Facts can change. They aren't immutable. Like your "fact" about neural networks needing to be trained by humans, which turned out to be false. It flipped over time. We could graph it as a step function.



                              In fact, this ties right back to my point about stochastic signal theory and "random" mass shootings. Random signal theory is about the chaos of events and/or determinism in the time domain.

                              However, I'm willing to posit that there's always a signal in the noise before these events, that they aren't completely non-deterministic, and that humans are just terrible at finding the signal in what they perceive to be "noise" but is, in fact, filterable. There is advanced discrete SIG/SIGINT math that describes this exact sort of thing (finding a tiny, tiny signal in a vast ocean of random signals).
                              Super-secret Google shit is not available to the masses. I stand by my "nets need training," because that remains the status quo, until you can get me a TPU or 20.

                              (edit) even if you did get me one tonight, there would still be an implementation delay, even if I threw every resource I had at the thing.
                              (edit moar) AGZ didn't just decide on its own to learn to play Go. AI remains dependent on meat.
                              Consider for a moment that there is no meme about brown-haired, brown-eyed step children.

                              Comment
