A List of Computing Predictions

I, like many technophiles, maintain an interest in a wide range of hardware and software developments. Recently, however, I've noted a number of technological developments which I am certain will become dead ends. With that in mind, I'd like to present a list of my own computing predictions.

- There will be such a drastic increase in computing power over the next five years that the software industry will be unable to keep up.

With the advent of multi-core processors, personal computing no longer strives to do one thing faster than before, but many things at once. Symmetric multiprocessing and multi-threaded applications have existed for a long time in technical computing and supercomputing, but these developments have rarely been applied to personal computer operating systems. With quad-core processors released into the commercial market, and octa-core processors on the way, PC hardware finally has the chance to offer useful multiprocessing capabilities.

That is, if programmers can apply their skills to multi-threaded applications. We're already seeing problems with this: computer games have increasingly protracted development periods, operating systems are ballooning with bloat, and consumer applications have largely left the extra cores untouched. Adding proper multithreading support to software is difficult, and it will likely stretch games developers' already protracted schedules even further.
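To illustrate why this is so hard, here is the simplest kind of bug that bites naive multi-threaded code. It's a minimal sketch in Java (the class name and loop counts are my own invention, purely for illustration): two threads increment a shared counter without synchronisation, and updates silently go missing.

```java
public class LostUpdateDemo {
    // Shared, unsynchronised state: this is the bug.
    private static long counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // read-modify-write, and it is not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 2,000,000; it usually prints less, and a different
        // number on each run, because concurrent increments get lost.
        System.out.println(counter);
    }
}
```

The textbook fixes here (a synchronized block, or an AtomicLong) are trivial; applying that discipline consistently across an entire game engine or operating system is where the extra development time goes.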

Another area which presents potential problems is increasing hardware miniaturisation. As technology gets smaller, people are logically going to apply it to smaller devices, including portable music players, mobile phones and handheld consoles. While I'm highly optimistic about having more power in the palm of my hand, and greatly enjoy using my current smartphone, increasing complexity in these devices has rarely been taken well. We already have massive problems with the Luddites going, "Why can't a phone just be a phone?" (I strongly object to your opinions, and disagree vehemently with your objections on principle, BTW), and with people finding it difficult to navigate interfaces on portable devices. While this is improving, with clearer interfaces and bigger screens than before, including those on hybrid slider and touch-screen phones, there's a long way to go before these devices will be appreciated in the same way as a PC. The problem is on the software side, not the hardware side, and that's something that's going to have to improve.

- Cloud computing will not catch on within the next ten years, and will remain a niche application.

Ah, cloud computing. I've heard so much about this, with applications like Google Docs presenting office software over the internet. It's completely overrated. You know, it reminds me of something I've read about, something that was beginning to die out about the time I was born: a concept called time-sharing.

You see, back in the 1950s and 1960s, when electronic computers really started to come into their own, computers were hugely expensive devices, only within the financial reach of scientists, universities, businesses, governments and military organisations. They were crude, often accepting their input through Hollerith punched cards and front-panel switches, and later through mechanical teletypes, which were loud, clattering machines vaguely resembling a typewriter. The problem was that most of these control mechanisms only allowed one person to use the computer at a time, and so the idea of time-sharing was devised. With a time-sharing operating system, a single computer could serve several terminals at once, dividing its processing time among the users so that each got a slice of the machine in turn. This persisted throughout the 1960s and 1970s, used with the increasingly powerful mainframes and minicomputers, to the point where hundreds of people could be supported on some of these computers at a time.
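Purely to illustrate the scheduling idea (not how any particular 1960s system actually implemented it), here's a round-robin time-slicer sketched in a few lines of Java; the user names and slice lengths are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TimeSharingSketch {
    // One entry per logged-in terminal user and the work they still owe the CPU.
    static class Session {
        final String user;
        int remainingMs;
        Session(String user, int remainingMs) {
            this.user = user;
            this.remainingMs = remainingMs;
        }
    }

    public static void main(String[] args) {
        Deque<Session> runQueue = new ArrayDeque<>();
        runQueue.add(new Session("alice", 70));
        runQueue.add(new Session("bob", 40));
        runQueue.add(new Session("carol", 55));

        final int quantum = 20; // the time slice each user gets per turn
        while (!runQueue.isEmpty()) {
            Session s = runQueue.poll();
            int ran = Math.min(quantum, s.remainingMs);
            s.remainingMs -= ran;
            System.out.println(s.user + " runs for " + ran + " ms");
            if (s.remainingMs > 0) {
                runQueue.add(s); // not finished yet: back of the queue for another slice
            }
        }
    }
}
```

Each user gets the whole machine for a moment, often enough that it feels like their own computer; that illusion was the entire selling point, then and now.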

Then, during the late 1970s and 1980s, there came a development which drastically changed the face of computing: the personal computer. It meant that people no longer had to use a massive centralised minicomputer or mainframe for many of their applications, and time-sharing began to die out as the PC grew more powerful. The personal computer was, in essence, developed to get people away from the concept of a massive centralised computing facility.

And therein lies my objection to cloud computing. Right now, the computer I'm typing on has more power than the fastest 1980s supercomputer. My computer at home would have been in the TOP500 list all the way up to the mid-1990s. Why then, when we have computers that can do so much, would we willingly move ourselves metaphorically back in time to the idea of a centralised application server? It's not even as if most of the consumer-targeted programs are any faster than the ones we have on our home computers. Indeed, because many of them are programmed in Java, and because our internet connections are generally so woefully inadequate for using a fully-featured office suite, these applications tend to be slower!

Now, I can certainly understand the idea of archiving files on the internet, although I still think it would be more logical to carry around a USB stick. But I do not understand why you'd want to move most of your workload to a slow, inadequate office suite or imaging program. I must therefore conclude that cloud computing will not find success outside of niche applications, and that it will not even catch on for those within ten years. People are making too much of a technology which was actually rendered obsolete more than twenty years ago.

- There will be no input technology within the next ten years that will displace the keyboard and mouse.

We've all seen the success of the Wii, with its technically inferior but still groundbreaking motion controls, and we've seen massive success for touch-screen phones, despite the notable inadequacies of the iPhone and many of its competitors. With all this buzz around these new input methods, it would be easy to presume that we'll soon have new input devices which will displace the ones we're all using right now.

I'm not so convinced.

You see, people have been predicting virtual reality and new input methods for years, and yet devices developed several decades ago are still going strong. The mouse was first developed in the 1960s, and the keyboard can trace its past back to the mid-1800s, via visual display terminals and mechanical teleprinters. The fact remains that there is currently no faster technology for entering text than the keyboard. Specialist stenographic keyboards work more quickly, but they still operate on many of the same principles as a typewriter's keyboard. The mouse, too, has advantages which are hard to ignore. It has the sort of sensitivity, accuracy and precision which motion controls and touch screens would kill for, were they personified.

I have mobile devices with both a QWERTY-style keypad and an on-screen touch-sensitive keyboard. When it comes to entering data quickly, the keypad will completely destroy the touch screen in a speed test, and that's with a more precise stylus-based resistive touch screen as well. I'd absolutely hate to try typing an entire review on an iPhone, something I've actually done on my Nokia E71.

There are other reasons why I feel that touch screens aren't going to displace the keyboard. Using a touch screen with your finger or thumb feels even worse than using a chiclet-style keyboard, of the kind so derided when it appeared on the IBM PCjr or ZX Spectrum. There are reasons why people still buy typewriters for hard-copy writing. There are reasons why people spend over $100 on twenty-year-old IBM Model M keyboards. It's the superior tactile feel of these devices, and the audible response the keys make when they're successfully depressed - a tactile feel which is almost completely eliminated when you try to engage a touch screen with your finger.

I think, once again, that people are missing the big picture, and that's why I'm predicting that I'll still be considered normal for using my keyboard on my computer in 2020.

- Social networking is a fad and its use shall have sharply declined in three years' time.

And finally, we move on to my most controversial point. I don't like social networking. Not one bit - I've even considered writing an essay entitled "Social Networking Considered Harmful". You see, I reckon that it's a fad, just like all of those other fads that I've grown up with. When I speak to people at college, I don't find many who actually want to engage with technology on any level beyond the internet and office software. I don't find many who even want to use Photoshop, or who admit to using computers to aid their fiction writing. Perhaps I'm talking to the wrong people, but these don't seem like people who are particularly interested in computers, and that makes me inclined to believe that they're not going to continue to use computers in the same way they do now.

The problem is that internet communication is often devoid of any real meaning. The limitations of text in e-mails and electronic messages strip most of the emotion out of a message, which is perfect for business e-mails and acceptable in personal e-mails, but far less adequate when it comes to expressing yourself on a social level. For that reason, when you look at a Bebo or MySpace page, it generally looks something like a GeoCities page, circa 1998. Clashing colours, poorly-chosen backgrounds and hideous spelling and grammar lend the impression of something that's been utterly hacked together. There's a reason why the web pages of sites like Google, and even the W3C, maintainers of the World Wide Web standards, stick to minimalist designs. Well-designed corporate websites stick to clean designs. Social networking pages, however, do not.

And that's even before we get into the actual content of the pages. It strikes me as quite frightening that this is the impression that some people actually want to create of themselves. That these pages are checked by companies for background information is even more frightening. When your pages don't exactly give the impression that you are even fully literate, let alone a potentially intelligent and creative employee, you are well and truly fornicated up the rectal cavity.

As somebody who's never signed up to a social networking service, I find it very difficult to make sense of the snippets of insider information that I hear on the news, often distorted as the newscasters fail to understand the true nature of the technology. However, I still find it impossible to understand why there are people who spend their time virtually befriending, on the internet, people they intend to have no contact with at all, even though I spend my own time writing reviews, articles and technological rants for people I've never met. Maybe there's some sort of leap of faith that has to be made, but I'm still convinced that social networking will be a phenomenon on life support within three years, just as happened when the dot-com bubble burst.

I understood all of that and agree with it too.
Multi-threading is where the future of processing lies.
Cloud computing? They did a massive feature on it on the news about how it's going to change our lives; my parents were watching and I told them there and then that it was bullshit. For things like Steam, fair enough - you can access your saves from any computer - but I wouldn't go any further than that. The internet is a temperamental thing and I would rather use a USB stick if I wanted to take some files with me.
I do use Facebook to talk to my family in New Zealand and Devon, but that's about all. I know everybody in my friends list.

These all seem like fairly reasonable predictions. I'm not so sure about social networking being just a fad. It's been around for a while, and I think it's here to stay. Only time will tell.

Also, I'd like to say that a huge cyberterrorism event will take down a large portion of America's communication infrastructure, and possibly even utilities like power, water and transportation.

IPv6 is riddled with security holes at the moment. I don't even know why newer operating systems enable IPv6 by default; we'll still be tunneling IPv6 over IPv4 connections for a long time to come.

Wow, this is, I think, the biggest wall of text I've ever seen on the Escapist.

A wall of text is a bad thing when someone doesn't use paragraphs. The OP is not a wall of text.

On the whole, I agree with RAK's and fullmetalangel's points, although I think that eventually we will find an optimum number of cores for a processor, as at some point there will simply be too many to manage.

I'm not sure social networking sites will be entirely dead. They may morph and change, as they have done (remember TheGlobe.com?), but people are fairly vapid and egotistical, and there is no better place to have your ego stroked with meaningless platitudes than TardSpace.

Hey, look, I have a thousand friends!
Hey, look, you don't know any of them, and you're a douche!

As for me, Facebook has become a way to maintain some minimal level of contact with people as we age and move further apart in this world, but to what end, I'm not sure. I also use it for a five-minute fix of the shittiest RPGs you can find. Mafia Wars, anyone?

Cloud computing? Well, I use Foxmarks so I have access to my bookmarks on just about any computer. That's it. Whoop dee fucking doo.

Also, very well written, OP, even if I am getting in on this thread a little late.

Steelfists:

Wow, this is, I think, the biggest wall of text I've ever seen on the Escapist.

A wall of text is a bad thing when someone doesn't use paragraphs. The OP is not a wall of text.

Obviously someone missed this.

That was a nice read, your well-trimmed hedge of text. I have yet to even look at a Facebook or other social networking site. I don't plan on changing that habit. The thing is, if someone is the kind of person that wants me to make an account on such a website, they are probably horribly annoying and therefore I will never get to know them, so they never ask.

You're right about touchscreens. They're not all that fun to use, either.

Come to think of it, my friend had an older CRT touchscreen monitor for his computer. This was years ago, and he'd had it for years before that. It seems that if the technology weren't too cost-prohibitive, it would have taken off. I can see it being helpful in some applications (obviously, as coffee shops and the like use it), but apparently the functionality improvement was insufficient to warrant an upgrade in most cases.

Wouldn't mind a decent flat panel touchscreen. They said it was pretty good for playing Age of Empires back in the day.

Cloud computing, as a concept, makes at least some sense in certain applications. The average business (from small to fairly large) will have no reason to even bother with such nonsense, but the largest corporations and the more dispersed smaller companies can definitely see a benefit. One of the biggest hurdles such entities face is designing a system by which they can easily communicate and transfer data between sites. Currently, the companies that have the spare capital will often invest in a data center and use VPNs to connect the outlying sites to the central location. Less affluent companies instead rely on less convenient (and less expensive) approaches, such as transferring data via e-mail, snail-mailing database dumps and the like.

Besides, given how willing businesses tend to be to jump on bandwagons even when the wagon isn't going anywhere they need to be, the only real hurdle is cost. If the price point is low enough per user, all it takes is a half-assed salesman to convince a client that it's a service they ought to use.

I don't imagine that social networking will die within the next few years. I think it'll take until around 2015 for it to fade. Twitter, however, I predict to fade before then.

I also imagine that while the keyboard and mouse will still be in use, the touch screen will be widely available as an auxiliary input device by about 2012. The keyboard is useful and compact enough to last, but new technological advances could make touch-screen laptops with a physical (rather than on-screen) keyboard affordable and marketable.

I'm entirely too lazy to read all of the first post, let alone the subsequent ones, but I have to weigh in on the idea of simultaneous processing:

As cool as it sounds, there is most definitely an upper limit and we're just about there, for 2 reasons.

1) Processors cannot go much faster due to physical restrictions on the material they're made of, so unless we devise some new super-self-cooling composite in the near future, processor speeds themselves are not going to improve much.

2) We do not know how to do simultaneous threads/processes efficiently. It's simply not feasible in modern computing. In any program, certain things must be sequential; that's not debatable. Some things can certainly be executed simultaneously, but I'd venture so far as to say that most of any given program will need to be executed sequentially, or will involve waiting for input.

Because of the above two points, processing is almost at its physical limit. There may be some massive breakthrough in the near future to solve them, but it does not seem likely.
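(Point 2 is essentially Amdahl's law: whatever fraction of a program has to stay sequential caps the total speedup, no matter how many cores you add. Here's a back-of-the-envelope sketch in Java; the 25% serial figure is made up purely for illustration.)

```java
public class AmdahlSketch {
    // Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores)
    static double speedup(double serialFraction, int cores) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
    }

    public static void main(String[] args) {
        double serial = 0.25; // assume 25% of the program must run sequentially
        for (int cores : new int[] {2, 4, 8, 16, 1024}) {
            System.out.printf("%4d cores -> %.2fx speedup%n",
                              cores, speedup(serial, cores));
        }
        // With 25% serial work, the speedup never exceeds 4x,
        // no matter how many cores are thrown at the problem.
    }
}
```

Whether 25% is the right number for any given application is debatable, but the shape of that curve is why "just add more cores" doesn't automatically translate into faster software.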

As a disclaimer, all of this is based on info ~8 months old and ICBF to see if it still holds true. I'd be very surprised if it was not still at least mostly accurate though.

Agayek:
I'm entirely too lazy to read all of the first post, let alone the subsequent ones, but I have to weigh in on the idea of simultaneous processing:

As cool as it sounds, there is most definitely an upper limit and we're just about there, for 2 reasons.

1) Processors cannot go much faster due to physical restrictions on the material they're made of, so unless we devise some new super-self-cooling composite in the near future, processor speeds themselves are not going to improve much.

Enter diamond.

Anyone remember back when Bill Gates said that computers would never exceed megabytes in terms of memory and processing power?

That's the same level of short sightedness a lot of people have nowadays in the computing industry. I wouldn't let it bother you.

Horticulture:

Agayek:
I'm entirely too lazy to read all of the first post, let alone the subsequent ones, but I have to weigh in on the idea of simultaneous processing:

As cool as it sounds, there is most definitely an upper limit and we're just about there, for 2 reasons.

1) Processors cannot go much faster due to physical restrictions on the material they're made of, so unless we devise some new super-self-cooling composite in the near future, processor speeds themselves are not going to improve much.

Enter diamond.

Or carbon nanotubes.

I like some of these predictions, like the staying power of the mouse and keyboard. A lot of hoopla has been generated by things like Project Natal, the Wii, and even Microsoft's multitouch tabletop computers, but at the end of the day, people will go with what's easiest to use, not what's coolest or most gimmicky. That being said, I feel you are sorely mistaken on a few points.

Technology-implemented social networking isn't a fad. People have been using the internet for some version of social networking since the web was first introduced. That's what the first WAN (ALOHAnet) was primarily used for. Later, when the internet became more widespread, its main purpose was still message boards and basic IM applications. Its current incarnations in the form of Facebook, Twitter and the aging MySpace may fade, but the general concept is not going anywhere.

As for the technological limitations of processors, at some point a physical limit is reached. Moore's Law has been slowing down for a few years now, because manufacturers are running into problems like transistor features that are only a few hundred atoms across. Once you get small enough, effects like electron leakage and tunnelling become very difficult to overcome. Throwing around terms like "diamond" and "carbon nanotubes" is all well and good, but practically speaking you can't overcome problems like the physical size of atoms.

Finally, as for cloud computing, saying that it "will remain a niche application" in the next 10 years is simply naive. Cloud computing isn't CURRENTLY a niche application. An incredible number of services use cloud computing, be it Folding@home on the technical side or any-multiplayer-game-ever on the more popular side. What do you think is happening when you play WoW? Most of the interaction and computation happens server-side, not client-side.

Predicting future tech is like trying to create a 10 day weather forecast when all you have is a weather vane. In general, the best we can do is wait and see.

scifikayaker:
As for the technological limitations of processors, at some point a physical limit is reached. Moore's Law has been slowing down for a few years now, because manufacturers are running into problems like transistor features that are only a few hundred atoms across. Once you get small enough, effects like electron leakage and tunnelling become very difficult to overcome. Throwing around terms like "diamond" and "carbon nanotubes" is all well and good, but practically speaking you can't overcome problems like the physical size of atoms.

Oh, I certainly realise that much. I understand the interactions between atoms, and why there will be a point where those interactions introduce too much unpredictability for the material to be used in processors. Designs such as diamond-based or carbon-nanotube-based processors simply extend the time we have left until we reach those limits.

scifikayaker:
Finally, as for cloud computing, saying that it "will remain a niche application" in the next 10 years is simply naive. Cloud computing isn't CURRENTLY a niche application. An incredible number of services use cloud computing, be it Folding@home on the technical side or any-multiplayer-game-ever on the more popular side. What do you think is happening when you play WoW? Most of the interaction and computation happens server-side, not client-side.

Graphics rendering, physics calculations and all of the other most gruelling tasks in a computer game are performed client-side. There's a reason why you don't need anywhere near as much computing power per person on the server side as on the client side. It's hardly as if the server is generating the graphics for every player (and you need a hell of a lot of floating-point grunt to generate current three-dimensional graphics). Meanwhile, Folding@home is a niche application in its own right, a scientific application which doesn't factor into most people's lives - although I do use it myself.

The model most businesses have in mind for cloud computing isn't the Folding@home model either. They're all thinking about people using their computers as terminals and relying on centralised computers for applications - see OnLive, Google Docs, et cetera for references.

RAKtheUndead:
[quote="scifikayaker" post="18.86737.2315032"]see OnLive, Google Docs, et cetera for references.

Yeah, actually how I got to this thread was by clicking through some OnLive links. I've been trying to decide if it will in fact be the gaming revolution that it claims to be. It is definitely very promising.

It seems that people don't realise that there is a limit to how small it is possible to make silicon chips.
That is: we are not far from said point, and when we reach that hurdle a number of things may happen:
1. The focus shifts entirely to creating software to get as much as possible out of the ultimate computer
2. We have another major breakthrough similar to the desktop PC (some have suggested quantum or organic computing)
3. The computer industry stagnates and withers away (very unlikely)

P.S. sorry if this has already been brought up, I am too tired to read through all the posts.

RAKtheUndead:

I think, once again, that people are missing the big picture, and that's why I'm predicting that I'll still be considered normal for using my keyboard on my computer in 2020.

This is something I like, because I have a biological analogy. The QWERTY keyboard (god, it's fun to spell that out) is like the eye, in that it has an overcomplicated design and could be made significantly simpler and more effective. However, the intermediate step would initially take us away from the desired improvement and would be inferior to the keyboard. Thus any business (or animal, in the eye's case) adopting the inferior intermediate model would be at a significant disadvantage and would not survive natural selection. So we will never be able to change this, barring some occurrence of a massive change in the status quo.

What about the chaogate? Now that's an exciting development in electronics, even though it's mostly theoretical at the moment.

RAKtheUndead:
- There will be such a drastic increase in computing power over the next five years that the software industry will be unable to keep up.

It is a bit difficult to go into this at length, but software can keep up if it becomes less imperative in nature. C++ is rubbish and people should stop using it. Java has better concurrency support, but is naff. Erlang does very interesting things in the telephony sector, like being able to swap out code units whilst the system is running (i.e. zero downtime during software updates, with no restarts) and does so by not letting you reassign variables. Functional programming languages parallelise well (in principle), but their compilers tend to lag behind the field due to the pace of innovation in their form. People use C (or rather GCC) because it is so well established that it has been optimised to get the most out of most single-processor PCs. The Glasgow Haskell Compiler shows some promise, but then every time I look at the functional programming scene there is some "hot new language" doing something ever so slightly different.

It seems as if there are two kinds of parallelism of interest: large-scale and small-scale. The latter is best served by parallel functional programming languages (which by their nature don't care much about order of execution); you could also use something like OpenCL to make use of the parallel processor already in your PC - your GPU. Array-processing languages like APL are of interest here too, where you apply a transform to a collection of data rather than imperatively looping over it one item at a time (as if the sequence mattered; generally it doesn't, provided that one operation has nothing to do with the next). Large-scale parallelism is best handled by extremely lightweight threads (as in Erlang), coroutines, sensible division of labour by a multitasking OS (i.e. put the DVD encoding on your other core rather than context-switch threads like crazy) and network (and intranet) distributed processing (which has enormous latency issues, but, as Folding@home has proven, does get useful work done).
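To make the small-scale case concrete, here is a minimal sketch (in Java, only because that language has already come up in this thread; the per-item function and the data range are invented) of expressing work as a transform over a collection and letting the runtime spread it across the available cores, rather than writing an explicit sequential loop.

```java
import java.util.stream.IntStream;

public class ParallelTransformSketch {
    // A stand-in for some per-item computation (think of a pixel filter).
    static double heavyFunction(int i) {
        return Math.sqrt(i) * Math.sin(i);
    }

    public static void main(String[] args) {
        // Declare *what* to compute over the collection; the parallel
        // stream decides how to split the work across the cores it finds.
        double total = IntStream.range(0, 10_000_000)
                                .parallel()
                                .mapToDouble(ParallelTransformSketch::heavyFunction)
                                .sum();
        System.out.println(total);
    }
}
```

Because no element's result depends on any other element's, the order of execution genuinely doesn't matter, which is exactly the property the array-processing style relies on.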

So, I think software will keep up with hardware, provided that people adopt different languages and OSes get smart and distributed.

- Cloud computing will not catch on within the next ten years, and will remain a niche application.

It is definitely over-hyped, but then every new technology is; that is the nature of marketing. Hot-desking and the ubiquity and utility of the iPhone legitimise this concept. People claiming that they can "get by fine" with a USB stick obviously don't work with centralised databases via a corporate intranet. I agree that it is similar to time-sharing, but it qualifies as a new technology because the cloud supports wireless access, and consequently computing devices that are so small that you always have them with you. If, like me, you view data as the most important aspect of computing, then it becomes reasonable to compromise on the low-latency, high-throughput feedback one gets from a proper workstation - one that holds all of the data it needs to crunch locally on its hard disc, or only as far away as the departmental server - in exchange for the utility of being able to check your company's Wiki whilst waiting for a bus.

So, I think Cloud computing is over-hyped, but isn't merely time-sharing and will continue to play a role, mainly with mobile technology.

- There will be no input technology within the next ten years that will displace the keyboard and mouse.

Well, I already use an A4 Wacom graphics tablet, with its wire-free, battery-less, pressure-sensitive, tilt-aware stylus, for long sessions with my many art applications. Even when it comes to something simple, like drawing a freehand circle, the mouse is just a joke. It is also a strain on my wrist, especially when held with my naturally dominant right hand to the right-hand side of my extended keyboard. Things have improved since I got one of these:

[image]
as my mouse hand has a shorter distance to travel from the "home row" of keys to holding the mouse and back again. It also means that my wrist is not uncomfortably twisted and giving me RSI. That said, I would like to see the back of Sholes' QWERTY layout:

[image]
and have something that is no less efficient, but substantially easier to learn - like a straight alphabetic layout. Now, I've worked this out using some of the same optimisations as Dvorak, relabelled a keyboard with stickers and taught my OS to recognise the new layout so that it works across all applications. It is slower to type on than a Dvorak layout, and slower for QWERTY typists (like myself) who are burdened with all the wrong habituated word-entry finger patterns, so I tend not to use it much at the moment, mainly because it puts all of the Command keys that I would use reflexively in strange places. Still, as part of a project to reinvent computing from scratch for neophytes, I am happy with this part of the User Interface design.

So, mice are OK for generally manipulating a "desktop" with windows, making selections, clicking buttons, etc., as they work well in concert with the keyboard, whose left hand can hold down meta-keys to extend the semantic richness of the actions of the 'pointer'. Yet a graphics tablet, or rather the naturalness of a stylus, trumps the mouse when it comes to an extended session with a Paintbox, and text entry is only really needed when you come to name the file you wish to Save. This consequently implies that an Xbox 360 gamepad with a Chatpad extension would suffice for general consumer-oriented "Media Center" lounge computing. It can be linked to the computer with the aid of a Microsoft Wireless Gaming Receiver for Windows and some hacky software drivers. Then it should be easy to plug the computer into your TV, plug your digital camera into the computer to flip through its contents, and use the Chatpad to name the events and places and the Pad to group selected photos under those headings for future retrieval, or even to individually title and annotate just who is in them.

[image]

[image]
You just need to be careful that the pad you use with the computer doesn't sync with (and turn on) the 360, but it does work. This is far richer than the Apple Remote (which I have nothing against as a design, but which, let's face it, you couldn't use to "drop into" a gaming session from within a unified "Media Center" interface). Of course, this all assumes that you are running a nice swish PC that outshines a 360 and that the games are all held on its hard drive, but I think this 'lounge computing' is a valid concept, even if I am not currently in a position to fully test it with my slow Mac Mini.

I must say that I am totally unconvinced by touch-screens. When I think how paranoid I get when people come and point at stuff happening on my display with their greasy fingers - it doesn't matter if they say, "I'm not going to touch it, I don't know what you are getting so worked up about" - I would rather they took the mouse and jiggled that over the area of interest. Maybe it is because my display is hard to clean, but this sense in the industry that we will all be using Minority Report-style UIs is just utter crap. Again, RSI rears its ugly head, as the posture required to use these active workspaces over-flexes the wrists and tires the arms.

The 360 Chatpad may seem naff, but it is fine for simple file-naming tasks, which are practically the only thing neophytes ever need a keyboard for (and that is more a requirement of the stupid file-system). You may say, "What about email?", but I can see video-email taking over in the relatively near future, with a short message that might have been done via Twitter being done with more emotional nuance (yet retaining the qualities of store-and-forward messaging rather than the intrusive qualities of the "videophone"). A decent static image could be sent if you happened to have a bad hair day, but the ease of voice to ear would obviate most of the need for QWERTY-keyboard encoding. Your message would, of course, still need a subject line in machine-parseable text, both for the benefit of the spam filter and for the other use of your Chatpad - full-text index searching (this would find the content of the emails themselves impenetrable, as they would just be sound files, but it would rapidly prioritise recently received video-messages with subject lines containing those words).

I hold out no hope for direct voice input, whether it be for dictation or for commanding the computing environment. Social factors are more the reason for this than failures of the technology (or the tiresome need to "train" them), as you lose all privacy in a workplace environment - you are bound to be 'overheard' dictating your resume as you look for another job on your lunch hour, or the background noise will render such systems prone to prank interjections: "Computer! Delete ..." (you get the idea). Commands work OK when there is no destructive impact on the system, so I think it may be fine in something like the iPhone 3GS, where you ask it to play songs by X, etc. rather than fuss with the touchscreen UI. This is probably because the technology works more reliably the fewer commands you ask it to recognise and the more phonetically distinct each of them is. Tom Clancy's EndWar used this principle, but the "mission critical" nature of the RTS just led to me giving up on it, due to comparatively rare failures, poor confirmation feedback and consequently intolerable correction latencies in a realtime system. So, in short, "voice control" has its uses, but they are extremely minor.

Actually, I think the most interesting input technology already exists and is the most passive:


It may be that Microsoft's Project Natal can provide similar functionality. You can 'fake' desktop 3D with this head-tracking, with a hack like this one (no expensive shutter glasses and dual GPUs required, or futuristic lenticular displays):


Great for games, and useful for 3D data visualisation and modelling applications, but I'm sure it will get "creatively over-used" by the developers of new desktop environments in incredibly irritating, disconcertingly flow-breaking ways - just as real fonts on the Macintosh gave birth to a bunch of unrestrained newsletters that looked akin to ransom notes.

Agreed. I don't see keyboard and mouse making way for voice and touchscreen, but I do see potential in supplementary peripherals.

- Social networking is a fad and its use shall have sharply declined in three years' time.

I agree with you on this. I think that far fewer people need computers than are being sold them. Really, we are in a situation where reasons have to be found to keep making PCs. It helps that every version of Windows and Office needs a faster CPU (and now a GPU) to run effectively, making the entire PC consumer base into tired hamsters running on a wheel just to stay in the same place. Snow Leopard bucks this trend a little: I was pleasantly surprised to find how much extra hard disc space I had after it installed, though that is mainly due to it jettisoning the fat Universal Binaries that supported every Macintosh architecture of the last few years except the Intel Cocoa one. (It seems a little faster and it's well worth the £25, but I am hardly overwhelmed by it.) However, the overall trend continues towards increasing cruft and bloat, almost as if there were a conspiracy at work to keep us upgrading our hardware to do pretty much the same things we did decades ago (email and word processing).

Without digital photography and now video editing, I doubt the industry would have survived the 90s. Now that these applications are being directed into communication tools, with ad hoc website creation becoming the norm (MySpace), they are appealing directly to more people - especially women.

I expect the next wave to be games-related. Not merely the "you must buy this computer costing $$$ to play Crysis at maximum settings or be considered a loser by all your PC mates" kind, but creative, with the maturation and increasing accessibility of the modding scene. Given that Little Big Planet, Halo 3's Forge and Animal Crossing keep the vitality of those titles on consoles by supporting customisable levels, I can see that PC game developers will eventually realise that it is far better to get their users to build some of their content for them. After this it would be a short step to end-user AI scripting, then to fully customisable game engines that mere mortals could use. Having to adopt an interpreted language (in order to avoid the boring Compile-Link-Run-Crash-Debug loop common to most PC game development), as, say, Unity does, may well require faster hardware to manage, given the convenience of a play-test/pause-debug interpreter in place of an orthodox C++ toolset. The main 3D game engine may remain in machine code (having been compiled by the game developer), but extremely large portions of the game would interface with it in the form of a scripting language.
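A minimal sketch of that engine-plus-script split, in Java using the standard javax.script API (the "damage rule" and variable names are invented, and whether a JavaScript engine is actually bundled depends on the JDK you run it on):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptedRuleSketch {
    public static void main(String[] args) throws ScriptException {
        // The compiled "engine" side: fast, fixed, shipped as machine code.
        int playerHealth = 100;
        int weaponPower = 30;

        // The scripted side: a rule a modder could edit without recompiling.
        // (Engine availability varies; older JDKs register one under "javascript".)
        ScriptEngine js = new ScriptEngineManager().getEngineByName("javascript");
        if (js == null) {
            System.out.println("No JavaScript engine is bundled with this JDK.");
            return;
        }
        js.put("health", playerHealth);
        js.put("power", weaponPower);
        Object remaining = js.eval("health - Math.round(power * 1.5)");
        System.out.println("Health after the hit: " + remaining);
    }
}
```

The specific language pairing doesn't matter; the point is that the tight inner loop stays compiled, while the rules players might want to tinker with live in something that can be edited and reloaded without touching the toolchain.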

Maybe this doesn't sound like a new idea at all, but what I'm saying is that this game creation may become as widespread as Little Big Planet level authors, or Halo 3 Forgers or even complete neophytes buying stuff from Tom Nook.

I think part of the success of social networking is its quasi-anonymity (like, who is Uncompetative?), and video may well wreck that. However, given that we already have people doing unspeakable things with their mobile phone cameras for the benefit of complete strangers... I'm not so sure. The thing is, being 39, I just don't think I am qualified to say what will be deemed hip with the young in 2012 (incidentally, the world is predicted to end that year anyway... so maybe that's one good thing that could come out of Armageddon - an end to Twitter).

RAKtheUndead:
I, like many technophiles, maintain an interest in a wide range of hardware and software developments. Recently, however, I've noted a number of technological developments which I am certain will become dead ends. With that in mind, I'd like to present a list of my own computing predictions.

- There will be such a drastic increase in computing power over the next five years that the software industry will be unable to keep up.

You could have summed up this entire wall of text by just stating Moore's Law. Computing power increases exponentially...

RAKtheUndead:

- Cloud computing will not catch on within the next ten years, and will remain a niche application.

Ah, cloud computing. I've heard so much about this, with applications like Google Docs presenting office software over the internet. It's completely overrated. You know, it reminds me of something I've read about, something that was beginning to die out about the time I was born: a concept called time-sharing.

You see, back in the 1950s and 1960s, when electronic computers really started to come into their own, computers were hugely expensive devices, only within the financial reach of scientists, universities, businesses, governments and military organisations. They were crude, often accepting their input through Hollerith punched cards and front-panel switches, and later through mechanical teletypes, which were loud, clattering machines vaguely resembling a typewriter. The problem was that most of these control mechanisms only allowed one person to use the computer at a time, and so the idea of time-sharing was devised. With a time-sharing operating system, a single computer could serve several terminals at once, dividing its processing time among the users so that each got a slice of the machine in turn. This persisted throughout the 1960s and 1970s, used with the increasingly powerful mainframes and minicomputers, to the point where hundreds of people could be supported on some of these computers at a time.

Then, during the late 1970s and 1980s, there came a development which drastically changed the face of computing: the personal computer. It meant that people no longer had to use a massive centralised minicomputer or mainframe for many of their applications, and time-sharing began to die out as the PC grew more powerful. The personal computer was, in essence, developed to get people away from the concept of a massive centralised computing facility.

And therein lies my objection to cloud computing. Right now, the computer I'm typing on has more power than the fastest 1980s supercomputer. My computer at home would have been in the TOP500 list all the way up to the mid-1990s. Why then, when we have computers that can do so much, would we willingly move ourselves metaphorically back in time to the idea of a centralised application server? It's not even as if most of the consumer-targeted programs are any faster than the ones we have on our home computers. Indeed, because many of them are programmed in Java, and because our internet connections are generally so woefully inadequate for using a fully-featured office suite, these applications tend to be slower!

Places devoted entirely to creating and doing one thing are better than individuals servicing themselves. That is the reason why cloud computing WILL catch on soon. (Your estimate of at least ten years is a poor one.) The idea IS correct, so please do not compare your computer to very old computers and imply that PCs are better.
I believe decentralized computing will be the next major utility. Electricity used to be personal: if you wanted power, you had to make it yourself, and computing is currently like this. However, people discovered that areas devoted to electricity generation made electricity cheaper than individuals could. This absolutely will happen to computing. Why is Google so popular? Because it would take your computer decades to do what Google's mega computing centers do in seconds. Just think of cloud computing as you having a screen while the computing box is off miles away, with all the computing you need available whenever you want it (yes, there is a limit, but unless you tried to calculate pi on your computer, nothing you did would be able to use all the CPU that the centers would have): your screen shows the results, their computers do the work. Current connections are too slow to do it, but in a short time companies will begin to use cloud computing, since they will be able to afford the large investment cost.
They will switch. It will allow them to get rid of all of their troubleshooting teams, as the center will have its own, which will be far better. It will be safer for them, since the center can easily devote a large amount of processing to preventing security problems and viruses.

Cloud computing is the decentralization of computing; decentralization makes things more efficient, and it will happen to computing. We have decentralized the things most important to us, food production and electricity. Cloud computing is just the next utility.

The Singularity:
Cloud computing is the decentralization of computing; decentralization makes things more efficient, and it will happen to computing. We have decentralized the things most important to us, food production and electricity. Cloud computing is just the next utility.

So, using centralised servers which end-users have no control over is the decentralisation of computing? You have it the wrong way around, and even if you were to put it the other way, centralisation of computing would still be a bad idea.

Have a quick think about what the majority of computer users use their computers for: internet browsing, office software, graphics editing and multimedia. Already, we're at the stage where almost any off-the-shelf computer will perform all of these tasks with aplomb. Indeed, many computers right now are considerably more powerful than required to carry out these tasks. Why the hell would most people need any more power, and for that matter, how is a huge centralised set of servers, which has to be maintained and properly cooled, going to be cheaper than just buying a netbook?

As for companies, the larger companies already have a superior option to cloud computing in the form of the mainframe computer. Unlike mainstream servers, mainframe computers are designed specifically to deal with high parallelisation and high input and output rates, and to maintain exceptional reliability throughout their lifetimes. There's nothing a server complex can do that will realistically take on an IBM System z10 mainframe in the mainframe's own territory. As far as I can see, you've ignored some of the basic problems with packing a whole load of computers into a small space, and ignored the fact that most of us already have enough power in a desktop PC to do what we want. We don't need cloud computing. I don't want cloud computing. I want optimised and more efficient software for the computers I already have.

 
