shadfurman

Makes, Projects and other general trouble.

SingularityMind: Manufacturing 

Manufacturing efficiency is skyrocketing. Every improvement in materials and computing has a multiplicative effect on manufacturing, as every step in the process can see an improvement, and there are usually MANY steps. Other steps can be combined or eliminated entirely. An increase in efficiency results in an increase in quality or a decrease in cost; you end up with more of the population being able to afford better and better products, raising the standard of living for everyone. 
The cellphone is an excellent example of increasing efficiency in manufacturing. In the beginning, people could yell to one another; if you were particularly boisterous, you could communicate a hundred yards at best, and quality decreased significantly over distance. Somewhere along the line someone discovered/invented fire signals and networks of runners, and writing was a huge improvement on that system. A few hundred years ago electricity was harnessed, and the telegraph followed shortly. Then the telephone was invented, and you could talk to anyone with a copper wire running to their house, and copper wasn’t cheap, but the system improved for decades and is still used today. Radio was invented, combined with the telephone, and voilà: if you wanted to carry around a briefcase of electronics, you could talk to anyone while within a few miles of a cell tower. The Internet came along, built to prevent the United States from losing communication in the event of a nuclear attack, but then the nerds in universities got a hold of it. Meanwhile, the cellphone could now fit in your hand… or on a fugly holster on your belt, and it was cheaper. Then smaller and cheaper. Then smaller and cheaper. In the late 90s the average American could afford a cellphone, and you could almost swallow one… if you really wanted to… say you were a spy and it contained secret documents… Anyways, around this time cellphones were connected to the Internet and gained digital cameras. A few years later, Apple upturned the tech world by releasing a cellphone that was little more than a smallish-large screen, not even a keyboard, but it did have multi-touch! The next year they released the App Store, allowing other developers to make programs to run on it; not only could a critical mass of the public afford an oxymoronically named smartphone, but most would actually want one (there were actually many smartphones before, though many forget). Since then smartphones have gotten better and cheaper. 
The side effect of the Western gluttony for pocket-sized connectivity and computing is that many people in 3rd world countries can now afford a cellphone. Sure, they’re much like the old bricks of a decade ago, and any self-respecting Western teenager would scoff at you if you bought them one for Christmas, but with these ridiculously cheap devices, many people all over the world who would never have had access to modern conveniences or the wealth of education available on the Internet can make electronic transactions, surf the Internet, and study for school… See the good your greed is doing? Smartphones are now whittling their way into these poverty-stricken economies, where kids can study on khanacademy.com and learn calculus without all the overhead of Western public education. 
The point of that overly long paragraph? The smartphone that is lifting the world out of poverty was made possible by increases in the efficiency of manufacturing, driven by the greed of Western society and evil/capitalist business geniuses. It’s easy to forget, living in the relative comfort of 40-hour work weeks under the floundering economies of the richest countries in the world, with the protection of law a democratic republic can provide, that people didn’t always have smartphones, and some still don’t have food or water, so we’ve still got a ways to go. What are the next steps in manufacturing? 
One of the next big steps is 3D printing, or additive manufacturing: taking a raw material and turning it into a product in a factory that will fit on a desk. You could think of the first 3D printers as analogous to regular paper printers, an art originally called desktop publishing. It’s easy to poo-poo 3D printing by comparing it to desktop publishing, because after the boom of home paper printers people just started taking their print jobs to Kinkos, and this is already happening with 3D printing. There are dozens of websites where you can upload a design and get your product in a week. The reasons this is the now, but won’t be the future, are poor ease of use and monofunction. The best consumer 3D printers only print in a few different kinds of plastic, usually just in a few colors. The number of things a person can print is quite limited in mechanical function, and good metal printers are hundreds of thousands of dollars as of 2016, but they will get cheaper. 
Future 3D printers will print electronics, combining hundreds, perhaps millions, of materials through very complicated on-the-fly chemistry. Of course, especially at first, these printers won’t be able to print a cellphone as good as the one in your pocket, but they will be able to print a TV remote, a car part, maybe even a battery. Right now you can print a cellphone cover, a hinge for your cupboard, or a specialized volume control knob for your stereo, so you can crank it to 11. 

Another area of 3D printing is organs. I don’t know the common term for this, but there is a guy who has been walking around for over a decade with a 3D-printed liver (or kidney, whichever is simpler, I forget; look it up, there’s a TED Talk on it). For now I’m going to call it bio-3D printing. Not only will this completely eliminate the necessity of harvesting organs and waiting on a list hoping you strike the lottery, eventually you’ll be able to 3D print an organ with your own DNA, or an entire artificial organism. You know in movies when they “clone” a person and it comes out as a fully formed adult? That kind of sci-fi BS bugged the crap out of me! Well… THAT will be possible. My estimate? 60 years. 
About 30 years ago some smart chap named Chuck invented the first patented 3D printer. Of course there were others before that, but Chuck patented his and made a business of it. You could argue 3D printing started much earlier, when a certain DaVinci painted layers of resin on top of each other to make a lampshade, and additive manufacturing has been around in one form or another, I’m wild-guessing, since before recorded history. 

Then Chuck’s patent expired, and dozens, maybe even hundreds, of bored Makers set to work developing and sometimes even selling versions of their own. Now, for a few hundred dollars, you too can be the proud owner of a finicky, plastic-wasting 3D printer. For a few thousand you can get one that prints candy, full color, sandstone, or just works most of the time. For a few hundred thousand you can buy one that will melt titanium dust with a high-power laser (sharks sold separately)… No joke, Elon Musk uses them to build his SpaceX Draco thrusters. The pie is high and rising fast; there’s a 3D printer on the International Space Station orbiting Earth RIGHT NOW. Peter Diamandis plans to use 3D printers to build robot parts to mine asteroids, FROM the asteroids. Almost any kind of material you can imagine being layered to make something has been done. Cookie dough? Frosting? Cement into houses? Cars? Guns? Yes, yes, yes, yes, it’s all there. Growing, and growing cheaper. 
You think printing whatever you want sounds like an amazing idea that will fill the landfills with unnecessary crap and failed test prints (they look like the Flying Spaghetti Monster crashed at Roswell)? Additive manufacturing? Imagine subtractive recycling. It just so happens that plastics like those used in milk jugs can be recycled into printer stock: recycling them is as easy as melting them into the long wire-like filament these printers use, and there are projects to do just that trick (turns out it can be a bit finicky). In the future, you won’t have to buy anything, as long as you have some trash to throw in the 3D printer, à la Back to the Future 2. 
It will become cheaper to have a desktop 3D-printer-like device that can make anything than to have a dedicated plant and process that makes one thing en masse, because the 3D printer will be able to print both the recycler and the printer. Anyone with trash containing the necessary raw materials will be able to print their own 3D printer that can make anything. You won’t even have to make your own 3D designs or pay someone for them; you’ll just tell your computer (which will be so small it can be in anything, including your head) what you want, and the computer will make a better design than anything a human possibly could. “Things” will be free. 


SingularityMind: AI

I was checking my facts for a piece on AI and saw Elon Musk had posted this guy’s article on Twitter. Way better and more informed than I’m willing to do, so… here you go.

The AI Revolution: The Road to Superintelligence
By Tim Urban

There are a few points I disagree with here. First, computer/human integration will become tighter and more acceptable. Many people think the public won’t want computer chips in their brains, and that might not be the method of integration; obviously the method the public accepts will be the method that gets commercialized and becomes cheap. But if you’re skeptical, imagine telling someone in the 60s about the prevalence of plastic surgery today. You might say, well, it’s not the majority of the population, and I’d point out that not everyone is obsessed with their body image, but everyone is obsessed with the Internet. Combine that with a cheap surgery that you probably won’t even have to be put under for, and that may even be an injection (unlike most elective surgeries today), and we will see mass adoption. What’s the point of this setup? There will not be an AI; there will be many AIs, and we will BE the AI and the AI will be us.
Safety research is important, and there will be accidents, but the research is being done; there will not be a revolt of the machines… so worry about the zombie apocalypse instead.

AI will seem intelligent in 10 years. Most people don’t talk about it, but stop and think about it: imagine Apple’s Siri in 10 years combined with Watson (IBM’s computer that beat the top Jeopardy players) in 10 years. You’re going to be able to ask your watch for any information, and it will reply with the exact and correct information; it will have its own personality that may evolve, and hold extensive conversations. YouTube videos of people putting two phones together to see what conversations they have will be a thing. People will empathize with their phones like in the movie Her. Apple’s Siri, Amazon Echo, OK Google, Microsoft’s Cortana. The makers of Siri (Apple bought Siri) are making a new one called Viv.

For computers to take over research, computers do not have to be as intelligent as a person. Many of the human brain’s neurons are devoted to emotions, hand-eye coordination, leftovers from evolution, etc… Sure, it works great, and most people are happy with their brain, but it’s not exactly ideally designed for logic and scientific research. In fact, all a computer needs to take over research is the ability to perform the tasks (simulations or robotics), a basic understanding of the problems and goals (which is being done now), and intelligent path finding. This type of setup is already being used for network penetration tests (test-hacking websites). Computer-guided development and evolution of computer intelligence will take off in 20 years; I believe this is the knee of the curve for commercial intelligence, not truly human-level intelligent computers. We don’t want a human computer, we just want a smart computer that’s reasonably good at faking an understanding of emotion.
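That “intelligent path finding” is just goal-directed search. A toy sketch of the idea (entirely my own illustration, not any real research planner or pen-testing tool): a breadth-first search that finds the shortest sequence of actions turning a start state into a goal state.

```python
from collections import deque

# Toy goal-directed search: find the shortest sequence of actions that
# turns a start state into a goal state. Real research or pen-test
# planners are far richer, but the skeleton is the same. (A real
# version would also bound the search so it can't run forever.)

def search(start, goal, actions):
    """actions: dict mapping action name -> function(state) -> new state."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path  # shortest action sequence found
        for name, fn in actions.items():
            nxt = fn(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # goal unreachable

# Example: reach 10 from 1 using only "double" and "increment".
actions = {"double": lambda n: n * 2, "inc": lambda n: n + 1}
print(search(1, 10, actions))
```

Swap the numbers for machine states and the lambdas for exploits or experiments, and you have the rough shape of what those automated systems do.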

SingularityMind: The beginning

Let’s start with some clarification on Moore’s Law. According to Webopedia:
“The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore’s Law, which Moore himself has blessed. Most experts, including Moore himself, expect Moore’s Law to hold for at least another two decades.”
http://www.webopedia.com/TERM/M/Moores_Law.html

[Figure: Moore’s Law — transistor counts per chip over time]

Since the invention of the transistor, transistors have been getting smaller and cheaper. This trend seems to be continuing even though growth in the number of transistors in chips sold to consumers has slowed. (It occurred to me that even though the number of transistors in a person’s main computing device may be smaller on average, the actual total number of transistors per person has certainly increased incredibly; I don’t have exact numbers on this yet, but here is a back-of-the-napkin calculation on how many transistors are sold per year per person.) With the uptick in tablets and cellphones there has been a downtick in desktops and laptops; tablets and cellphones have fewer transistors than desktop and laptop processors, and the average consumer doesn’t seem to need the complexity for everyday computing tasks. Even still, the cost and size of a cellphone processor is decreasing, and it won’t be long before cellphone processors are faster than today’s laptops. The graph above shows the unbroken trend. Ray Kurzweil claims this trend can be graphed as far back as Charles Babbage’s difference engine of the 1820s.
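To get a feel for what an 18-month doubling period actually implies, here’s a back-of-the-napkin projection. I’m assuming the 18-month rate from the Webopedia quote and using the Intel 4004 (about 2,300 transistors, 1971) as the baseline; real chips have tracked a somewhat slower doubling, so treat this as an upper-end illustration of the math, not a measurement.

```python
# Back-of-the-napkin Moore's Law projection: transistor count doubling
# every 18 months (1.5 years), starting from the Intel 4004
# (~2,300 transistors, 1971).

def projected_transistors(year, base_year=1971, base_count=2300,
                          doubling_years=1.5):
    """Project transistors per chip for a given year under pure doubling."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1986, 2001, 2016):
    print(year, f"{projected_transistors(year):,.0f}")
```

Ten doublings per 15 years means a 1,024x jump; that compounding is the whole story of the graph above.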

I mark the beginning of computing in the 1940s with the first electronic calculators such as the Harvard Mark I and ENIAC. Taking up a decent-sized room, prone to errors, and less capable than a modern calculator, these computers were not what we think of today when we say “computer.” There were no displays, no games to be played; they just calculated in 1s and 0s. It was the beginning. The first transistor, invented in 1947, was no smaller than its predecessor the vacuum tube, or its cousin the mechanical relay, and it was very expensive. A transistor can be used as a signal amplifier or, as in computing, a switch. Imagine you had two switches on the same wire: you would have to turn on the first switch AND the second switch to make the connection. If you had two switches side by side, connected at the top to one wire and at the bottom to another, you could turn on one OR the other to complete that connection. These AND/OR type connections, called Boolean logic, are the basis of all modern computation. Silicon transistor technology allowed the transistor to shrink; a few years later it appeared in radios and got cheaper, and research into integrated circuits began. Integrated circuits, more than one transistor on a single part, began development in the lab and appeared in the 1960s. The Apple I was built by Steve Wozniak in 1976, and Apple Computer quickly rose to hysterical fame; the Apple II added color. What else appeared in the 1970s? The cell phone. All through the 80s and 90s, computers were rife with development, competition, and falling costs.
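The series/parallel switch picture maps directly onto code. A minimal sketch (the function names are mine, purely for illustration):

```python
# Two switches in series on one wire: current flows only if BOTH are
# on -- that's AND. Two switches side by side in parallel: current
# flows if EITHER is on -- that's OR.

def series(a, b):
    return a and b   # AND: both switches closed

def parallel(a, b):
    return a or b    # OR: either switch closed

# Truth table for every combination of the two switches:
for a in (False, True):
    for b in (False, True):
        print(a, b, "series:", series(a, b), "parallel:", parallel(a, b))
```

All modern computation is built by composing billions of exactly these two connections (plus an inverter), which is why shrinking the switch shrinks the whole computer.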

In the early 1990s, IBM combined basic computing with a cellphone and a monochrome display to create the first “smartphone.” The end of the 90s saw the popularity of smartphones in Japan. The Blackberry, aka “crackberry,” arrived in the early 2000s. The nut-buster of them all came in 2007 with the release of the Apple iPhone, its primary competitor close on its heels in 2008. What’s next? A smartwatch? Smart glasses? Computer-brain implants? Yup, they’re already here. Computers went from warehouses, to rooms, to closets, to desks, to laps, to wrists and headwear. In a year this post will look dated, but predicting the future in 10 years will be nearly impossible.

If you’re skeptical about computer to brain and brain to brain connections, we’ve come a long way in the last 20 years.

http://www.wired.com/2015/07/science-can-learn-wiring-monkey-brains-together/

If you’d argue people would never have surgery just to check their Facebook status, eventually they won’t have to.

https://en.m.wikipedia.org/wiki/Nanomedicine

If you can’t imagine how silicon chips will shrink much smaller than the nano-sized parts they already are, they don’t necessarily have to in order to stay within the spirit of Moore’s Law, as long as they are getting faster and cheaper. Eventually we will run into the end of silicon technology. Good news: crude versions of quantum computing are already here.

http://www.dwavesys.com/quantum-computing

PREDICTION:

If you follow all these curves we’ve painted, computers will continue to get cheaper, faster, smaller, and most relevantly, closer to the body, then inside the body. It should be acknowledged that there will be significant pushback from people not wanting to put things into their bodies. Just as with the first vaccines, grown men fainted at the thought of intentionally sticking a needle into their skin and injecting a fluid into their body. There is a natural, healthy, evolutionary revulsion to the idea of opening our bodies up, to putting things into them. But this revulsion didn’t stop the adoption of vaccines, nor increasingly imaginative piercings and tattoos, or plastic surgery, and it won’t stop computers from entering the body.

Computing will get smaller and cheaper. Wearable computing such as watches, glasses, and even clothes will become more popular in the next 10 years. As interface technology, such as voice recognition and artificial intelligence, becomes better, the need for a larger general-purpose computing device such as a cellphone or tablet will decrease. People have gotten used to having access to and sharing more and more personal information; this trend will continue, often to the obscene and to the detriment of the technology, but largely to the benefit of society. Artificial intelligence will become more practical and useful, automatically recognizing things from various cameras and microphones and attempting to relay relevant information to you in a useful way; this technology won’t be very good for some years, but eventually it will be so good as to essentially give us extra senses. Basic implantable body sensors and authentication devices will become more popular. More brain implants to treat medical conditions will pass FDA approval. Today it’s RFID chips; tomorrow it will be blood and heart rate monitoring. Real-time sensing of blood sugar levels. These sensors will be able to extrapolate from heart rate and hormones in the blood to know if you’re in danger and possibly alert authorities (please turn off while skydiving). They will be able to accurately guess your emotional state and post it to Facebook, often with the embarrassing humor autocorrect gives us today. Autonomous cars will be on the road, and traffic accidents will start dropping off.
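The “know if you’re in danger” part is, at its simplest, anomaly detection on a sensor stream. A toy sketch of the idea (my own illustration, with made-up baseline numbers): flag a heart-rate reading that sits far outside the wearer’s recent baseline.

```python
from statistics import mean, stdev

# Toy body-sensor alert: compare the latest heart-rate reading against
# the wearer's recent baseline and flag large deviations.

def is_anomalous(readings, latest, threshold=3.0):
    """Flag `latest` (bpm) if it is more than `threshold` standard
    deviations from the mean of recent `readings`."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

baseline = [62, 65, 63, 64, 66, 61, 63]  # resting heart rate, bpm
print(is_anomalous(baseline, 64))   # ordinary reading
print(is_anomalous(baseline, 150))  # spike worth alerting on
```

A real implant would fuse many signals (motion, hormones, context) precisely so it doesn’t page an ambulance every time you go skydiving, but the statistical core looks like this.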

Over the next 20 years, the sensors and implants in the body will increase in number and get smaller and smarter. Brain implants will start to be a “thing.” AI will be a know-it-all pet, following you around, ready to tell you anything you want to know at a moment’s notice, a fully capable digital data assistant if you want it. AI will SEEM to you like it is intelligent. Characters in computer games will act like real people… or unreal people. Any accurate news will be written by massive data-collating servers, watching every blog, Twitter account, Pinterest, Vine, and whatever the new thing is, able to tell in less than a second when a world-changing event is happening… or even know BEFORE it happens. Half the cars on the road will be electric and autonomous, and traffic accidents will be very rare.

In 40 years… computers will completely disappear. No one will ever pull out a phone during dinner or their laptop in a meeting. This will be the future people imagined in old 60s sci-fi. People will walk up to a curb, a car will pull up at that exact second, and they will get in and be taken where they want to go without ever saying where they are going. If you want to communicate with another person, you’ll be able to do that any way you want to. Leave them a virtual post-it note that they see virtually. Send them a voice message without opening your mouth. Or even send them a thought, for those really hard-to-explain conversations; they will open it and understand instantly what you mean. Kids will attend school from home, gathering together in virtual spaces to work on projects together, and school will be completely unrecognizable. Kids won’t take tests, or sit in rows staring at the sunshine outside. Every piece of work a kid does will be evaluated for level of understanding. AI will be practically smarter than people, able to “understand” every nuance of language and intellect, hold lengthy conversations on philosophy, join with your brain to instantly give you superhuman intellect, or play you your favorite completely original movie that no one else has ever seen before, that you happened to be exactly in the mood for. Want to go to Mars? Well, you COULD go, but you don’t have to. You can take an ultra-realistic stroll on Mars, or inhabit the body of a robot and walk around doing real science. You won’t even notice the delay in the radio signal; the robot will make choices for you and convince your brain that’s what you wanted to do, and when you make a choice, the program will convince your brain there was no delay between when you decided to turn left and when the robot turned left. With this same technology, you can be a cat for the day, or even inhabit the brain of a psychopath. 
Science will change so incredibly fast that no one will bother studying it until they need it, and then they’ll learn it super fast. The “next thing” will be “so last week” before you’ve even heard of it… and it will have sold billions of units. It’s easy to look at this future and think of flaws. What if this… or what if that… Well, you’re wrong; people and AI will have thought of that, I promise, and it will be fixed. There is nothing you can think of that won’t have been thought of by then. Pollution will cease as computers evenly distribute wastes for industrial uses and perfectly market prices. Global warming won’t be a problem, and any damage done will be fixed. Poor people around the globe may be poor, but there will be little disease, and nearly everyone will have enough food and a comfortable bed to sleep in at night. It will be so cheap to provide everyone basic necessities that it will just happen as a spillover of other industry. It won’t even be socialism; going through a government wouldn’t be efficient. A computer sees a competitive hole, calculates the benefit to the company at a $0.0001 profit margin, and food is distributed, or a free hotel is built, without a person to oversee any of the process. It just happens.

 

Yes, these futures are complete extrapolation, and there’s no way these visions are in any way accurate, except in spirit. This is what the future will be like. Regulation, unforeseen scientific discoveries, and cultural trends will guide the river of the future to one of an infinite number of possible unimaginable futures, of which the one I described is possible. However, I believe the future I describe is very likely to be close to the real one, based on extrapolating the changing trends and a healthy dose of imagination. What say you?

 

SingularityMind

I’m going to do a series of apologist posts in defense of the optimistic view of the future called “The Singularity,” capitalized because while it’s not the name of a location, it is the name of a point in time. It was popularized, and I think coined, by the inventor-entrepreneur Ray Kurzweil (you’ve heard of the electric keyboard, I presume; that was him). One of the other things Kurzweil is famous for is taking computer-related technologies, plotting relevant metrics on a graph, and extrapolating the curve. Like Moore’s Law (a projection from the brain of Gordon Moore, one of the founders of Intel, stating that the number of transistors on a chip per cost would double every 18 months), most computer-related technologies feed off this growth in computers. Everything from the Internet to genetic sequencing is growing at an exponential rate. After viewing hundreds of curves from varying technologies, Kurzweil hypothesized an optimistic utopian future. The Singularity refers to the point in time when the intelligence of a computer outpaces the intelligence of a human. I’ve heard many projections, but I recall Kurzweil estimating this time around 2040, less than 30 years away, within most of our lifetimes. At this point, as computers take over all research, ALL technology becomes computer-related, and we’ll see an unprecedented explosion of information, incomprehensible except to those insanely creative enough to imagine the probable possibilities.

Now I want to be clear: this won’t be like the Johnny Depp movie Transcendence (surely inspired by the Kurzweil documentary Transcendent Man), about a man who invents a computer smarter than a human, but with no intelligence to inhabit it, his technology to download a brain having gross side effects. On his deathbed they decide to give it a try, and he almost changes the world. (I won’t spoil the end… though I kinda already did.) The lone inventors the likes of Edison and Tesla in today’s world are few, though along with Dean Kamen and Elon Musk, Ray Kurzweil is surely one of them. And even Edison and Tesla had their teams to help them; they were the driving spirit. Most technology (transistors, computers, the Internet, cell phones, etc.) is built discovery by discovery, advancement by advancement, by competing teams of researchers and engineers. The Singularity will happen gradually; few will probably notice when it happens, and there will be no line to draw in the sand to state, “this is when it happened.” As a case study, the Siri feature of the iPhone was a fantastic achievement in software engineering; many hailed it as the long-awaited natural-language-processing AI we’d been promised since the 60s. Now, less than half a decade later, we curse it the 1 in 10 times it sends your mom an explicit text instead of telling her you’re coming over for dinner. We’ll be cursing this future for some time to come, even as it cures disease, the planet, and war, just as we do today despite decreased death from wars and disease as we look in awe at Hubble images on our tablets.

Also, the technology doesn’t actually need to “think.” One of the arguments against The Singularity is that computers will never be able to think; they merely need to appear to think and give the same results as thinking. I am convinced we will have “thinking” computers in less than 20 years, but this thinking will only be in appearance: able to answer almost any question we have, do our homework, remember to feed the cat, and record one’s life story through conversation, ordering it into a sequential, entertaining biography for posterity… then make it into a movie. In 100 years… Well… that’s gonna be the next few… dozen… few dozen posts.

(Insufficient disclaimer: I did not fact or spellcheck anything in this post… Don’t hold your breath for future posts.)

How NOT to remove the infrared filter from a cellphone camera.

So I had this brilliant idea to turn an old phone into a WiFi baby monitor. Now, of course all cellphone cameras have infrared filters on them, right? Gotta remove that to make it night-vision capable. After prying at the lens fixture with a jeweler’s screwdriver for 45 minutes, I finally got it off and was presented with… a bare image sensor… no infrared filter. Of course, I royally scratched the coils for the autofocus, and it appears the microscopic bearings may have a specific order, as they appear to be color-coded… and those fell out. Autofocus is trashed. I didn’t get pictures, as I was planning to take them after I got the non-existent filter off. I guess I might use it for a microscope…

Some GREAT info on Aluminum/Carbon fiber fuel cells


Stefan has done some great experiments with Aluminum and Carbon fiber fuel cells and posted his results on the overunity.com forums. He also posted videos; links in his post.

NOT a battery!

Well, I’ve been doing some reading, and the aluminum-carbon (or in my case, copper) cell isn’t acting as a battery; it’s a fuel cell. The aluminum is oxidizing, releasing electrons, which is why it can work in open air and why it produces hydrogen in water. I’m still a little fuzzy on the exact chemical reaction; my chemistry is pretty lacking. 
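For reference, the textbook aluminum-air chemistry (the likely reaction here, though this homemade cell may differ in the details) is:

```latex
% Anode: aluminum oxidizes, releasing electrons
\mathrm{Al} + 3\,\mathrm{OH}^- \rightarrow \mathrm{Al(OH)_3} + 3e^-
% Cathode: oxygen from the air is reduced at the carbon electrode
\mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \rightarrow 4\,\mathrm{OH}^-
% Overall cell reaction
4\,\mathrm{Al} + 3\,\mathrm{O_2} + 6\,\mathrm{H_2O} \rightarrow 4\,\mathrm{Al(OH)_3}
% Submerged in water, aluminum can also react directly,
% which would explain the hydrogen bubbles:
2\,\mathrm{Al} + 6\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Al(OH)_3} + 3\,\mathrm{H_2}
```

The aluminum is consumed as “fuel,” which is exactly why it behaves like a fuel cell rather than a rechargeable battery.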

Carbon is non-corrosive and fairly nonreactive, which is why it makes a good electrode, but it can have a pretty high resistance, which decreases the amperage. Winding a good non-corrosive conductor (stainless steel or silver) through your carbon electrode will help decrease resistance. You can also use stainless steel or silver alone, but stainless steel will eventually corrode and have to be replaced, and silver is expensive.

Yay for electrochemical fuel cells. I figure a model with aluminum cans will provide a sufficient fuel source.