Showing posts with label Technology and Buddhism.

Monday, September 02, 2013

On project proposals, start-ups, and heck, life


I often think that a useful attitude to getting things done, Buddhist though it may not necessarily be in its effects (that's for another post, but it involves right speech and right livelihood), is to approach any task with the determination and confidence of Vo Nguyen Giap.  He's 102 now, Wikipedia asserts. Moreover, as you can see in the interview embedded, his language is simple enough that if you have even a smattering of high school French, you can understand him.




I bring up Gen. Giap for another reason about project proposals, life, the universe, and everything.

The odds are against you.

The odds are against you because you lack experience, because you're getting older, because things work until they don't, because you're never fighting the last battle, because history never repeats itself, because there are a zillion others in competition, because capitalism is inherently unstable and the markets for your products are shrinking because of the increasing concentration of wealth, because time, money, equipment, and staff are limited, etc.

The odds are against you, and there's simply no point in worrying about it.  Rather, you have to find yourself where you are, and act properly and effectively where you are.

And even monkeys fall from trees.

You will make mistakes. They will cost. The two most common reasons for failure by any organization are poor management and under-capitalization.  (Do you need a guru to tell you that?)  That is, people are living their lives in the work world as they do in the rest of their world. (I just realized that we need a part of speech that is like a fractal property, rather than a simile or metaphor, but I digress. A simile comes close enough for the purposes herein.)

All of the above is to say that the First Noble Truth permeates the work world, including the start-up technology work world.  And by realizing that, one can start to dwell and to flourish amidst this world permeated by the First Noble Truth, and start to transcend it.

And the only way to keep going is to realize that you're either going to learn a lot and fail along the way (the most likely outcome for many new projects) or you're going to win by a combination of getting lucky and, through attrition, surmounting as many obstacles as were in your path and then some. Kind of like the approach that the North Vietnamese army took, though naturally their conduct was in the same category, morally, as that of any other war-making entity. They killed many people. As did the US.  But if you're too attached to the outcome of your project, start-up, etc., you probably will not understand and learn from your mistakes, and will probably get attached to the obstacles in your path.

So, with right livelihood, and with an ethical approach, there's nothing wrong with borrowing the motivation and morale of Gen. Giap, and not the nasty violent bits of war.

And one other thing to point out, which I'm pretty sure permeated the NVA's morale, is this: you're pretty lucky already if you're involved in some high tech project or start-up or what-not.  Most folks don't get that far. And, hopefully, you've been exposed to real work, so there's an understanding that what you're doing is far, far easier on you than what most people working in the world endure.

Now I don't know if the guru of Silicon Valley had similar advice, but if not, I have no idea what he's peddling.

Will meditating help you "get ahead" in such an environment permeated by First Noble Truth chaos and the fog of economic war?  If you're asking that question in the hopes of getting an answer then you're too attached to the outcome.  Your meditation practice is of immense value in the conduct of your life for many reasons, but if you're trying to get something out of it,  please, if you are in the Zen school, consult somebody competent in a lineage and discuss your practice with that person. Please.




Tuesday, July 10, 2012

I don't work in Silicon Valley, but if I did...

I'd find the idea of hard-partying 20- and 30-something-year-olds indeed a ridiculous stereotype.  I'd find it a ridiculous stereotype anyway a) because there are lots of people my age over there... that is, they're "grownups," and b) every engineer, in the course of their work, knows or should try to know how they're going to change the world... unless you're an engineer in the defense industry (where it's kinda sorta obvious how you're going to change the world).



Sunday, June 17, 2012

Technology and our use of it as a reflection of who we are...

Because various computers in my house have aged, and because there are still things you can't readily do on a PC that you can do on a Mac after nearly 3 decades of Macs, I am the new recipient of an Apple iMac. So if this post isn't coming out the way you're accustomed to reading it, let's just say I still don't have a few things figured out yet.  But they're not the important things to me.

There are things a PC does better today, but that's mostly because Microsoft still does things to privilege the use of its platform - things like accessing the Microsoft webmail client in Safari, though the Mail application on the Mac handles it without incident.

PC versus Mac isn't the whole point of this post, but it illustrates a recurring theme here, a continuation of my musings on this article that appeared in February of last year about how Steve Jobs was able to capture the initiative in the consumer electronics market.


Other companies fail to do things because they've overlooked potential openings or are cutting corners to save money; under Jobs, however, every spurned opportunity is a conscious, measured statement. It's why the pundits who give Apple products poor reviews for not including industry-standard components -- for instance, the iMac's lack of a floppy drive -- just aren't getting it: Apple products are as defined by what they're missing as much as by what they contain.
To understand why, one has to remember that Jobs spent much of the 1970s at the Los Altos Zen Center (alongside then-and-current Gov. Jerry Brown) and later studied extensively under the late Zen roshi Kobun Chino Otogawa -- whom he designated as the official "spiritual advisor" for NeXT, the company he founded after being ejected as Apple's CEO in 1986, and who served as officiant when he wed his wife Laurene in 1991.
Jobs's immersion in Zen and passion for design almost certainly exposed him to the concept of ma, a central pillar of traditional Japanese aesthetics. Like many idioms relating to the intimate aspects of how a culture sees the world, it's nearly impossible to accurately explain -- it's variously translated as "void," "space" or "interval" -- but it essentially describes how emptiness interacts with form, and how absence shapes substance. If someone were to ask you what makes a ring a meaningful object -- the circle of metal it consists of, or the emptiness that that metal encompasses? -- and you were to respond "both," you've gotten as close to ma as the clumsy instrument of English allows.
While Jobs has never invoked the term in public -- one of the aspects of his genius is the ability to keep even his most esoteric assertions in the realm of the instantly accessible -- ma is at the core of the Jobsian way. And Jobs' single-minded adherence to this idiosyncratically Japanese principle is, ironically, what has allowed Apple to compete with and beat Japan's technology titans -- most notably the company that for the past four decades dominated the world of consumer electronics: Sony.

The author of the blog and Microsoft likely still don't get it. 

They don't get that a successful technology product is adapted to the way people are, and is useful to them where they are.  And it's not overly hostile to the environment.  It's deeper than that, too, but I won't go further there.  What I will say is that so many problems are created by not taking these kinds of things into account in all aspects of product design.  The PC and the Mac are metaphors for how our society has approached things: the Mac, a "socialist" (or, if that's too politically loaded for you, "communitarian") version of the Way Things Could Be, is highly integrated, (mostly) much easier to use than a PC, and works better.  A PC is made by outsourcing, and throws more power and gigaflops at problems, resulting in mostly unappealing compromises in performance.

On the Mac, if you open an ftp site, you know what happens? Folders appear on the screen.  Like they should. Try that with Internet Explorer, if you've never opened an ftp site.  To download and install Open Office on a Mac, it takes maybe 15 minutes with a decent wireless internet connection. Have you tried installing a new version of Microsoft Office lately?  Oh, and one is free and the other is what...$150 or $300 or something like that?

So it's a metaphor for how we should structure and unstructure things, and interact with each other, which, like many things, is probably applicable in aspects of our lives, families, and communities beyond Stuff You Do With a Computer.


Tuesday, April 17, 2012

Awareness and...?

I may have more to say yet on life, death, and the internet of things. But this post is about awareness. And maybe a bit about the internet of things.  I came across an article about awareness and its continuum a few days ago but was too engaged at the time to blog about it here.


One problem is that the word has more than one meaning. Trying to plumb the nature of self-awareness or self-consciousness leads down one infamous rabbit hole. But what if the subject is simply the difference in brain activity between being conscious and being unconscious?

I think we Zen folk actually focus not so much on the nature of awareness as on our practice of being aware.


In studies using anesthesia, the paralytic effects of drugs used during surgery were blocked from one forearm, and then attempts were made to communicate with the patient. Dr. Alkire wrote, “Patients under general anesthesia can sometimes carry on a conversation using hand signals, but postoperatively, they deny ever being awake. Thus, retrospective oblivion is no proof of unconsciousness.”
The recent research by Dr. Scheinin and Dr. Langsjo and colleagues, including Dr. Alkire, looked for proof of consciousness. The researchers used brain scans in combination with two drugs, propofol, which helped cause the death of Michael Jackson, and another anesthetic drug, the many-syllabled dexmedetomidine.
The standard measure of unconsciousness is that a subject or patient does not respond to commands. By that standard, when a subject responds, he’s conscious. What makes dexmedetomidine an ideal drug is that people who are completely under can be brought back from oblivion by gentle shaking and loud speaking, even if they are still on the regular dose of the anesthetic.
In Dr. Scheinin’s study, when unconscious subjects on this drug were told to open their eyes, they responded. Then most of them drifted back into apparent unconsciousness, without their brain’s neocortex turning on. Only the brainstem, the thalamus and one part of the cortex were active.
The subjects under propofol were not waked up, but as the drug was withdrawn, the pattern of their awakening fit well with the other data.
Questions remain. What level of consciousness exists without the neocortex? Does this mean the subjects understood what was happening with more primitive brain regions?

It may be even more complicated than that, since the above is detailing a particular response to a verbal command.  The issues raised here - including the issue of memory as a determinant of awareness - mean that in terms of science and technology, we simply do not understand yet the ways in which our own biology is related to awareness, and really to memory.  We're just beginning to get ideas on these fronts, as well as on what it means for the awareness of self as self to exist biologically.  It's partly why I cringe when I see people going woo about the internet of things, technology, etc. and putting some kind of metaphysical spin on them.  They misuse science's purpose, or misunderstand it, or don't care.

Now don't get me wrong - technology is certainly influencing our biology in terms of our diets, our modes of transportation, and such.  The ubiquity of computing, memory, and  communications devices is no doubt contributing to an atrophy of our own cognitive and expressive skills.  But that's not anywhere near saying that my iPhone is primitively aware of anything.  Hell, I don't know what else can be aware of anything, strictly speaking.


Friday, March 23, 2012

Off on business...but a couple of pending ideas...

I'm off to Korea on business, and don't know when I'll get to post or how much in the next week or two, but I thought I might point out a couple of ideas in embryonic stage right now:

1. I saw on Algernon's blog a post on pink slime and vegetarianism. As a non-vegetarian, as a person who is fascinated by blurring borders and deconstruction of categories (including that of life and death), I have some points with which to respond in this on-going conversation.

2. Regarding blurring boundaries of life and death, awareness and non-awareness, I've a bit more to say about the Internet of Things and universal awareness.  Mainly though, it's that the design of the Internet of Things must be done without regard for the metaphysical.  Sorry folks, but we see through a glass darkly as one guy wrote, but we engineers must design towards the sensible and observable and measurable. 

I'll be back.

Wednesday, March 21, 2012

Technology, Buddhism and Creativity

 






"Internet of things?" I say to myself.  I've done that.

Yeah, I have.

No need to go into details, but yes, there are real flesh and blood Buddhists developing real technology now. I generally don't talk about it on this blog because I prefer to keep a pretty high wall between what I do here and what is professionally attributed to those to whom it should be, including myself, colleagues, employer, etc. However, it is possible for me to talk as an individual generally about technological developments, and it is a matter of public record that I've been involved in the development of technologies enabling the wireless internet.  So as long as I don't get too specific, I'm OK here from an ethical point of view.  And while I don't do applications for M2M (that is, "machine to machine," which is how "Internet of Things" is more commonly and compactly referenced), as a real technologist I have a few perspectives on that article.

First, let's take this part:

Perhaps when the abstract idea of a “web of life” becomes physical—when our plants, houses, boats and bodies are interconnected through technology—interconnectedness will feel more real to us. Perhaps we will better understand the impact of our behaviors when visualized aggregated data shows us the consequences on air quality of taking the bike instead of the car to work. But will this knowledge of our connection to all other things make us better people? Or will we just fuel our addiction to stimulation, becoming experience junkies who use increasingly advanced devices to post updates, tweets and check-ins and win badges, rewards and social status? What happens when our plants start tweeting that they’re thirsty and our cars check themselves in at a parking lot by the beach? What was supposed to be enlightening becomes performance art.

The Internet of Things will produce data sets like we’ve never seen before, but that doesn't necessarily mean we will have more meaningful products. So the question becomes, how can we design connected objects with meaning and mechanics to make people engage in better behavior? 

Reality check:  

  • M2M devices and applications are going to be made in those nasty places where cell phones already are made. It will be the case because people with economic and political power can cause it to be so for their own benefit.  
  • Applications in automotive, agriculture, health and energy are already being developed with specific objectives in mind.  Those objectives happen to be making more stuff and services that serve people in order to make other people money.  It's up to the end users, who may aggregate for good ends, to produce good ends from these interconnections.
  • You won't make people engage in better behavior through devices themselves just as you won't make people better chefs by designing better steel for knives.  People with better knives can become just as well better killers.  Technologies are a set of tools. Don't forget that.

Matt Rolandson says, “The first step is to put meaning on the agenda in the product development process, as emotional and philosophical intention, by encouraging designers with ideas about how to manage intention and awareness. A lot of what is developed today uses the triggers of fear or social stress..."


Reality check:

I could go on about how products are designed today, the "Agile Development" fad/trend, etc. Instead, I'll go a bit meta on this and simply point out that this has been done for years in industry, though many (rightfully) disagree as to what "meaning" means here, and what is "right" and "ethical."  But for anyone who doubts my point here, I'd suggest they read The Effective Executive by Peter Drucker.  And those of us who are in Buddhism and technology are endeavoring to practice it as we make our project plans, reports, software modules, and systems.  We are endeavoring to benefit all beings when we determine what sorts of technology and development we pursue.

[Vincent] Horn sees a future when the use of bio- and neuro-feedback gets more advanced and thereby can tell us when our minds start to wander, when our attention goes away. A big part of Buddhist thinking is being reminded to be present, and a number of technologies are being developed toward that end. Vince sees a huge potential to automate certain activities in order to free up energy to explore new vistas of the mind.

Reality Check
I see that and other things.  I just can't tell you about it at all other than to ACK what Vincent wrote above.  Suffice it to say, he ain't seen nothin' yet.


As the Internet of Things is being developed, there is a question of whether the movement toward an interconnected society will be hindered by monetization.


Finding sustainable models of development will be done, because the market will demand it. 

One other point I'd like to make though, which is not considered at all by those in that article. It has to do with creativity and technological development.

Engineers design stuff and create groundbreaking research for a couple of reasons: first and foremost, it's fun to create, or to lead others to create.  It's enjoyable. It's like mountain climbing or going on an adventure to develop something that no one ever did before; you are pretty much seeing what has never been seen before in the history of humanity. Folks like me (including me) had to prove theorems that were previously unknown in order to make the stuff that works today work.  We're driven to do it, just as an artist is driven to create art.  Even if we don't make oodles of money in Silicon Valley (though we're not uncomfortable).

It's why I'm skeptical when I see futurist stuff (I'm looking at you, Ray Kurzweil).  Many technologists (including the author) have been around the block on these things.  When I see someone using a smartphone, I have to think to realize that my inventions made that scene possible.  The reason is that I, like hordes of other technologists, am driven by the question that morphed into the title of a Cartoon Network ripoff of Mythbusters; that is, "Dude, what would happen if...?" To say we should design a product that "reinforc[es] a positive identi[t]y for" end users would kill the creativity.  Or as Rilke is reported to have put it, if my creative demons are exorcised, then so will be my creative angels.  We make tools; we make amoral tools.  Maybe app designers can find good ways to use them, but you can't design a knife that won't cut you if used wrongly.

Moreover, that scene of the smartphone user wasn't made by a single technologist or even one single group of technologists.  There were, simply for starters, all the beings involved in the supply chains, including those workers in those nasty places I mentioned earlier. Futurists tend to be blissfully unaware that the cost of these things in human life and experience needs to be acknowledged and addressed.  Since they're not involved directly in the doing and don't see the doing, it's going to be significantly more difficult for them to be aware of it.  Here's a hint, though: what was the photo that adorns the top of this blog portraying?


Finally, in regard to futurism and Kurzweil, I'll quote a section of the Wikipedia article on him; these bits are consonant with my view of the subject:



Kurzweil's ideas have generated much criticism within the scientific community and in the media. There are philosophical arguments over whether a machine can "think" (see Philosophy of artificial intelligence). Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people...This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."[50]
VR pioneer Jaron Lanier has been one of the strongest critics of Kurzweil’s ideas, describing them as “cybernetic totalism”, and has outlined his views on the culture surrounding Kurzweil’s predictions in an essay for Edge.org entitled One Half of a Manifesto.[51]
Pulitzer Prize winner Douglas Hofstadter, author of Gödel, Escher, Bach, has said of Kurzweil's and Hans Moravec's books: "It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid."[52]



Of course Hofstadter's point could be generalized significantly: Much of what everyone does (including myself)  is a mixture of very good food and dog excrement blended together. That's why folks invented process improvement - or as the Japanese put it,  改善処理 (kaizen shori). Or, as Patti Smith put it, "The transformation of waste is perhaps the oldest preoccupation of man..."

We have to question how we're questioning so as to figure out what to improve.

That about says it all.



Tuesday, January 31, 2012

Scientific and Technological Literacy and Western Buddhists

I make no secret of the fact that I've got a Ph.D. in Electrical Engineering.    I am grateful to the teachers I've had who've been able to impart a way of thinking about the physical world in such a way that I am able to produce technology that can be used.

Because of my background and experience, and because of the state of others' background and experience, I'm more than a bit shocked at the level of scientific and technological ignorance that I find in the on-line writings of quite a few Western Buddhists.1

If politics is the art of the possible, as Bismarck said, engineering might be said to be the study and practice of the technology of the possible. In order to be a competent engineer, it's useful at some point, in some area at least to be a competent scientist.

I really don't feel like naming names here, because it's just not too fruitful to do so. Instead, let me present a series of bullet points:

  • The fact that many people are asserting that climate change is not anthropogenic is an indication of political factors in play in the discussion of technological issues.
  • These political aspects of climate change are similar to political aspects that have driven the development (or exploitation) of the commons for centuries.  For example, read this book on the Great Flood of 1927.  Denying the physics of climate change is no more a denial of an "equivalent truth" than denying that water in rivers flows because of gravity.
  • Technological utopia or technological Armageddon are two sides of the same coin.  One of the "folk theorems" in engineering communities might be expressed compactly as "The Law of the Conservation of Badness."  Everything has a cost.  The trick is to understand what the cost is, and to use that cost to actually help things along, as far as our survival and alleviation of misery is concerned.
  • The idea that "If we only understand X, then everything will be better..." is a form of grasping.  Sometimes a grasping for knowledge and understanding must be done to achieve a certain benefit - as in the understanding of some aspect of mathematics necessary for the design of a radio receiver, for example.  But that understanding is simply limited, and doesn't involve the understanding or consideration of the inevitable side-effects that the actual building and use of that radio receiver might have, which have the capability to be both benevolent and malicious.
  • Having said all of the above, despite our ignorance of many fields of physics, biology, etc. the Scientific Method is still the best tool we have for understanding and predicting phenomena having to do with the physical world.  About other aspects beyond the measurable, science is silent.  If you acquire a knowledge of quantum physics,  it might help you to design semiconductor-based devices, but studying how to cook with garlic might give you more insight into the nature of your consciousness.

Chances are that with only greed, hatred and ignorance driving things, we'll be driven to extinction.  I hope not. As an engineer,  I have no interest therefore in how technology can help propagate Buddhism (per se).  I'm more interested in how the Way can find its way into the technology of the possible.








_____________________________________________ 
1. Then again, I'm also appalled at the state of some so-called writers of science who don't have much idea about what engineering is all about. But I'm used to that. Many scientists are not very good engineers, just as many engineers might be good at one area of science, but tend to miss the big picture when it comes to applying science (or not being able to) in other areas.

Monday, January 30, 2012

C. 3000 Posts on: The Impermanence and Irrelevance of Authoritative Narratives

The Blogger thing tells me that this is the 3000th post - which, with a profile post included, probably means it's the 2,999th post.  I'm not sure why that's particularly relevant, but quite a few bloggers do post such kinds of milestones.  Very few celebrate their 567th blog post. But with millions of blogs out there, you can't rule it out entirely.

As I surveyed the info-sphere this morning to jog my memory into what I was going to write about, I came across a few articles, as I often do.  Two articles of the "authoritative business genre" really spoke to me this morning (here, and here).  Actually they didn't; they didn't speak to me metaphorically, and neither did they speak to me literally.  In fact, they whelmed me with their evident irrelevance.  Woe is us - which I think is the right way to say it, but I'm not entirely sure.

One of the articles deals with the "Yin-Yang of Corporate Innovation" or something like that.  The other deals with Wired UK's "smart list" of "people who will change the world."  Let me present additional data to make your day.

Ever hear of the "magazine cover indicator"?  It's what investment market players call a "contrary indicator." That is, when a mainstream (not specialized) business publication propagates, to a mass market of information consumers, a concept well understood by "those in the know" - one that's made people money - it's an indicator to cash out. Like the famous cab driver who gave the millionaire stock tips on the eve of the Crash of '29, it signals that the "last buyers" in the market are being told of the Big Play, and after the last buyers, there are no more buyers.  As you can see from that last link, sometimes the cover's pretty uncanny in its ability to predict the future by reversing the "plaintext narrative" of the cover.

Now consider the Wired UK article... did you ever notice that the very name of Wired is a magazine cover indicator? I did, a bit more than 12 years ago. The name of the magazine first arose in connection with the wired internet. And if you regularly read this blog you can well understand why I view that as a contrary indicator come true; but even if you don't, the term "Dot Com Bubble" should remind you.  I do read Wired from time to time; it does tell me of things and people I might not normally be aware of.  But whether it's an issue of Wired or Fast Company (is that still around?) or the various industry fora I attend from time to time, my first instinct is to deconstruct the main narrative, because that's exactly what minds smarter than mine in the industry tend to do.  I've also had the benefit of knowing people who swore by those stories in mainstream publications, only to be found woefully overtaken by events.

Regarding the Times article on "innovation": while the author throws around a lot of buzzwords ("open innovation" is a current favorite, now fading), the author - and his sources, in particular, it seems, John Kao - fails to express what that might actually mean in today's world, because it sure doesn't mean what's stated in the "plaintext meaning" of the article.

Many companies these days are thinking about how to challenge Google or Apple.  And some of them are asking the wrong questions, because in part they don't "get" how these companies got where they are, and they don't at all see how to actually succeed in their endeavors despite what these companies are doing now.  Instead they - like the authors of that Times article - are taking away models of innovation from them as though those were somehow fixed narratives.  Well, you heard it here first: Google and Apple will each stumble big, and in different ways.  And so will you and I if we continue a cookie-cutter prescription for the way things will be.

There's that exchange from the movie The Matrix whose "capping phrase" has entered our discourse:

Spoon boy: Do not try and bend the spoon. That's impossible. Instead only try to realize the truth.
Neo: What truth?
Spoon boy: There is no spoon.
Neo: There is no spoon?
Spoon boy: Then you'll see that it is not the spoon that bends, it is only yourself.

There are no hard and fast formulas to the way one lives one's life, carries out one's endeavors and enterprises, etc. There's only what we can do with ourselves.  You can get a good view of this through a mindfulness practice - after a while you realize that your preconceived notions are just in your head, and sooner or later, what you thought was "impossible" is in fact possible.

I'll (almost) finish this post with something I did yesterday - a 書道 of "cloud."  I started doing this a while back with no talent, no knowledge, no innate ability, and no experience, like everyone who starts anything for the first time.  This one is not my worst... hopefully I'll get better.  But if I had clung to the thought that I'd never be able to do this at all, I wouldn't have been able to do even this at all.  So there it is...




Now let me finish this post with a final thought for you: Does the narrative of "there is no narrative" apply to this blog post or not?

The degree to which it does or does not can't, I think, be known at this time.

Tuesday, November 22, 2011

Does Siri have Buddha nature?

One of the key aspects of my livelihood involves, from time to time, the acquisition of consumer technology, both to see what I've (in part) wrought and to see what the general state of the art is. And so it is that I am in possession of an iPhone 4S. Siri "lives" amongst a bunch of computers somewhere; it will not disclose its "location" (if indeed it has a single such location). It sometimes gives seemingly witty answers to questions, and without a wireless connection is at a loss to give help. Also, sometimes Siri gets too many requests and asks to be left alone for a while. There have been reports that people feel more attached to their iPhone 4S with Siri than to previous iPhones; apparently the voice interface gives some kind of "humanity" to the device. I have to say that I see this, though the genius of Siri is fundamentally that it breaks out of the interface, clumsy for many, that a device in the form factor of a phone inevitably presents if limited only to tactile inputs. (Too many reviews of the device have focused on the "gee, that's a threat to Google" aspect and have completely ignored this one, which is actually far more important.)

Siri is admittedly pretty crude for a human simulacrum. But, as Kevin Drum notes, computers becoming as smart as people isn't that far away.

In 1950, true AI would look like a joke. A computer with a trillionth the processing power of the human brain is just a pile of vacuum tubes. In 1970, even though computers are 1000x faster, it's still a joke. In 1990 it's still a joke. In 2010 it's still a joke. In 2024, it's still a joke. A tenth of a human brain is about the processing power of a housecat. It's interesting, but no threat to actual humans.

So: joke, joke, joke, joke, joke. Then, suddenly, in the space of six years, we have computers with the processing power of a human brain. Kaboom.

Here's the point: technological progress has been exactly the same for the entire 80-year period. But in the early years, although the relative progress was high, the absolute progress was minute. Moving from a billionth to a trillionth is invisible on a human scale. So computers progressed from ballistics to accounting to word processing to speech recognition, and sure, it was all impressive, but at no point did it seem like we were actually making any serious progress toward true AI. And yet, we were.

Assuming that Moore's Law doesn't break down, this is how AI is going to happen. At some point, we're going to go from 10% of a human brain to 100% of a human brain, and it's going to seem like it came from nowhere. But it didn't. It will have taken 80 years, but only the final few years will really be visible. As inventions go, video games and iPhones may not seem as important as radios and air conditioners, but don't be fooled. As milestones, they're more important. Never make the mistake of thinking that just because the growing intelligence of computers has been largely invisible up to now that it hasn't happened. It has.
In fact, Drum is, if anything, pessimistic: the fact that computers (millions of them, in principle) can be networked means that "computers" becoming as smart as people is already somewhat near reality.  That is, the computational power of many computers can already be leveraged to produce results that would be impossible for any idiot savant to arrive at in a lifetime.
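Drum's arithmetic can be sketched in a few lines of Python. The specific numbers below (a trillionth of a human brain in 1950, a doubling every two years) are illustrative assumptions chosen to match his narrative, not measurements; the point is that a constant relative growth rate concentrates nearly all of the absolute progress into the last few doublings:

```python
# Illustrative sketch of Drum's argument: steady exponential growth in
# computing power, starting at a trillionth of a human brain in 1950 and
# doubling every two years. All constants are assumptions, not data.

def brain_fraction(year, start=1950, doubling_years=2.0, start_fraction=1e-12):
    """Fraction of one human brain's processing power available in `year`."""
    return start_fraction * 2 ** ((year - start) / doubling_years)

for year in (1950, 1970, 1990, 2010, 2024, 2030):
    print(year, f"{brain_fraction(year):.2e}")
```

Under these assumptions the fraction is still about a thousandth of a brain in 2010 and roughly a tenth (Drum's housecat) in 2024, yet crosses 1.0 around 2030: joke, joke, joke, joke, joke, kaboom.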

Does that imply, in any way, sentience?

The answer, at least from a scientific and/or phenomenological point of view, is still going to be "can't say." (Read Douglas Hofstadter and call me in the morning.¹)  And in a sense, it doesn't really matter, because our lives are still conditioned as they are; we are replete with the senses and volition and consciousness and such, and the fact that there are really smart computing machines out there doesn't diminish that.

Let's put environmental and social issues aside momentarily. (The damn things are quite inefficient relative to us organic computers, and such technology inevitably creates further class and social divisions.) Rather than ponder whether an intelligent computing agency could approach human sentience, it's more appropriate to consider what we are and can do, and perhaps to have a bit more humility because of our diminished place in the ecosystem of existence, encroached upon by advances in evolutionary biology and animal sociology as well as artificial intelligence.

That even with such smart machines there will still be things beyond their capability (for now) shouldn't be cause for a "human of the gaps" view of ourselves; we should, however, focus on all the stuff we can do in this space and time.

______
1. Note: I actually disagree with quite a bit of the talk linked to above. Especially, if Hofstadter is representing Kurzweil correctly, the latter doesn't quite get what the genome actually is, and in particular the "information" contained in the genome isn't the totality of all information present in a human being. They'd have done well to have Richard Dawkins at that talk, or better, somebody who understands genetics better than I do; my grasp of genetics as information theory isn't all that grand.  And of course, like Hofstadter (who is way too polite, I think), I balk at the notion of an environment that can sustain an arbitrarily large amount of computing power, as well as a whole host of other Kurzweil bunk.  But you probably knew that.

Thursday, September 30, 2010

I am not worthy!


As I've mentioned before, I have the iPhone app "Zenbrush" on my iPhone.  It's a useful thing for me to learn Kanji/Hànzì (漢字)  and to be able to write them.  It's a fascinating brain exercise: To be able to recognize Kanji (sorry dear, I learned Japanese first, and am only beginning Chinese!) is one thing, and to be able to write Kanji is an entirely different thing!  The two areas of the brain that are involved in learning these things aren't identical.  Furthermore, to be able to write Kanji one has to be able to cultivate one's "spatial memory" to an extent that is much greater than one uses for writing Western alphabets.

Anyhow,  just take a look at some of the uploaded images on the Twitter site #Zenbrush. I am a total piker, a rank beginner by comparison.  Oh well, it's not the point. I'm learning.  It'll take a while to learn.  And as I recall Suzuki roshi once wrote that people like me might actually get good at 書道 because we aren't trained to write perfectly.

That's comforting.

Tuesday, December 29, 2009

Perhaps they'd like to talk to a real science/engineering guy...

The Buddhist Geeks did that nice interview with Genpo Dennis Merzel, which enabled Gniz to consider Merzel's responses here. It turns out the Geeks are also planning a "Dharma 2.0" conference in Boulder, CO in 2010.

I tweeted them. I presume that means I tweeted Vince Horn, since the agenda seems to cover some of what I cover here:

* Buddhism & Technology – The information age has radically altered almost every dimension of our personal lives, our society, and economy. What impact will it have on the Buddhist tradition, and are there ways we can consciously adopt technologies to benefit Buddhist communities?

* Cutting-edge Buddhist Practices – Many Buddhist teachers are being informed directly by other pre-existing traditions of personal exploration and change. The result is that all sorts of innovative and interesting hybrid practices are emerging in the Buddhist world. Are these practices as radical as their creators claim? Or are there examples of teachers who are simply watering down the teachings of the Buddha, re-packaging them in fancy garb, and charging gobs of money for them? We’ll explore these questions, as well as engage in some of the more promising of these hybrid practices.

* Buddhism & Science – Scientific explorations into the benefits of Buddhist-style meditation have exploded in the past several years. What is the implication for the Buddhist tradition, and for the wider populace?

* The Future of Buddhism in the West – Underlying all of the previous topics is a question about where we are now, and where we are heading tomorrow. With such an array of complex factors influencing the development of Buddhism today, how can we engage with the future in a way that honors the rapidly changing nature of things, and the need to act quickly at times, with the deep-rooted need to stay present with what is?

I figured, with all I have recently said about "Intelligent" "Design" (especially the Wilber kind), "Biocentrism," and other fads, it would be useful to have a real engineer (i.e., an applied scientist) with over 50 patents (or is it 60?) address some Buddhists to talk about What Science Really Is, and How Buddhism Relates to Science.

Or I could talk about Buddhism and Technology. As a guy whose work is actually in the phones I use, I know something about the latter, and as a Buddhist, well, I know something about the former. But if I went there I would go all Nagarjuna/nullity on the whole issue, because ultimately it's how you behave, it's how you practice with the other sentient beings, not things that increase our footprint and are hard to recycle.

Of course, I could also talk about "The Future of Buddhism in the West," especially given my position that whatever I've seen from the Big Names from Buddhism in the West, there simply has not been anyone like a Lin Chi, a Yun-Men, a Dogen or a Hakuin.

I could also talk about Buddhism and Western Ethics, or What I Have to Do Ethically in My Job versus What Some Buddhists Do. With respect to some Buddhists, those that give away the Dharma, I come up short (well, duh: I labor in a capitalist enterprise!). With others, though...

Anyway, if they reply maybe it would be useful to start a dialogue there. Worst comes to worst, Boulder's not such a bad place...