More reasons to use web standards

In Web Standards Design they point out some of the benefits gained from using standards:

The Benefits

Improved Page Load Times
Spaghetti code, nested tables and outdated markup can triple the bandwidth required for even the simplest of websites. This means that users are forced to wait longer for your pages to load – increasing the chance that they’ll bail out before they’ve even seen your logo.

Improved Bandwidth Usage
Operating costs are inflated as you pay your hosting company to serve one meaty 60K page, when they could be serving three streamlined 20K pages instead. Hard numbers are hard to come by, but in general if a site reduces its page weight by 35%, it reduces its bandwidth costs by the same amount. An organisation spending £80,000 a year would save £28,000 a year.

Lower Production and Maintenance Costs
Do you create a new version of your site every time a new browser or device is released? Designing and building with web standards lowers production and maintenance costs. Make design changes in hours, not weeks.

Forwards Compatibility
When designed and built the right way, any document that is published on the web can work across multiple browsers, platforms and devices.

Backwards Compatibility
Because standards are inclusive by nature, standards-based design accommodates people who use older browsers and devices. Even those that are yet to be built or even imagined!

Improved Accessibility
Comply with accessibility laws and guidelines without sacrificing beauty, performance and sophistication.

Support for Non-Traditional Devices
Adhering to standards allows organizations to accommodate a variety of non-traditional browsing devices – from PDAs and mobile phones to Web TV, video game consoles, Braille devices and screen readers.

Repurpose Documents
Separating style from structure and behaviour facilitates the repurposing of web documents. Need that HTML file in PDF, CSV or Text format? No problem.
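The repurposing point above follows from keeping structure and presentation apart. As an illustrative sketch (the file name styles.css and the markup below are hypothetical, not taken from the quoted text):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Standards-based page</title>
  <!-- All presentational rules live in an external stylesheet,
       so a redesign touches only the CSS, never the markup. -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <!-- Semantic structure only: no nested layout tables, no font tags.
       The same document can be restyled for print or mobile, read by
       screen readers, or converted to PDF/text without rewriting it. -->
  <div id="content">
    <h1>Latest news</h1>
    <p>Lean, semantic markup also weighs far less than table-based
       layouts, which is where the bandwidth savings come from.</p>
  </div>
</body>
</html>
```

A print or mobile variant then needs only an alternative stylesheet (e.g. one linked with media="print"), with no change to the HTML.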

Software engineer will be the number one profession in the United States in 2011

Software Engineering is the #1 Job in the United States in 2011:

The software engineering / software development / programming profession often receives high rankings in various career surveys. Although I do think there are some differences in these various job titles, they’re used interchangeably enough that I intentionally mix them here. Last year, CareerCast ranked Software Engineering as the #2 job in the United States for 2010 and I blogged on that then. This year, Software Engineering has moved ahead of Actuary to take the #1 spot on CareerCast’s rankings for top job in 2011. As with last year, there is an article in the Wall Street Journal based on this study.

It is interesting to see why Software Engineering moved to the #1 spot for 2011. The actuary profession apparently saw a reduced overall outlook and greater stress due to concerns about the insurance industry and potential regulation. Perhaps even more interesting are the reasons for the positive momentum of software engineering. The study called the software engineering job market “broader and more diverse” thanks primarily to cloud computing and mobile device development. The greater number and variety of positions leads to greater potential for an industry and reduces the competitiveness factor. It’s just a study, subject to the mistakes and weaknesses of generalization, but it was particularly interesting to see that the stress factor for Software Engineering improved from 25th best to 15th best. That certainly fits the theory that more positions mean less stress.

Not surprisingly, the study includes the statement that software engineering and other top careers “require proficiency in math, science or technology, and all of them require higher education or specialized training.” Besides Software Engineering being #1 in this study, Computer Systems Analyst is ranked #5.

The article and its methodology are interesting enough, but the feedback comments have a value of their own. Some are ridiculous, but worth a laugh. The contention about what constitutes a software engineer versus a programmer versus a coder rehashes the same old arguments, but there is a certain satisfaction in seeing that some things never change.

In the end, I’ve never believed that I need a study to tell me whether I have a good job or a good career. That being said, I have always felt that software engineering is a reasonably good choice of career when looking at the ratio of reward to effort. There are few careers that I’m aware of with this type of compensation for a bachelor’s degree (the typical educational level of software engineers, though some have more and some have less formal education). Most importantly, most software engineers I know do find satisfaction in their work.

And the article it refers to: The 10 Best Jobs of 2011.

What are the differences between HTML4 (XHTML 1.0) and HTML5?

The W3C has prepared a document, HTML5 differences from HTML4, which details the main differences between HTML4 (and its XML variant, XHTML 1.0) and the new version of the language, HTML5.

Some of the main differences are:

  • HTML5 defines a syntax that is compatible with both HTML4 and XHTML 1.0. For example, a line break can be written as <br> (HTML4) or <br /> (XHTML 1.0).
  • A new attribute for the <meta> tag is introduced to declare the character set:
    <meta charset="UTF-8">
    although the traditional method can still be used:
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  • The DOCTYPE is simplified:
    <!DOCTYPE html>
  • HTML5 allows SVG and MathML elements to be embedded.
  • New elements are introduced, such as section, article, aside, header, footer, etc.
  • New attributes are introduced, such as media, charset, autofocus, placeholder, etc.
  • Some elements change, such as a, b, i, menu, etc.
  • Some attributes change, such as type, name, summary, etc.
  • Some elements disappear, such as basefont, big, center, etc.
  • Some attributes disappear, such as align, background, bgcolor, etc.
  • Improved APIs, such as getElementsByClassName() and innerHTML.
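As a minimal sketch combining several of the differences listed above (the page content and the class name are illustrative):

```html
<!DOCTYPE html>  <!-- the simplified HTML5 doctype -->
<html lang="en">
<head>
  <meta charset="UTF-8">  <!-- the new, shorter character-set declaration -->
  <title>HTML5 example</title>
</head>
<body>
  <header><h1>My site</h1></header>
  <section>
    <article class="post">
      <h2>First post</h2>
      <p>Marked up with the new structural elements.</p>
    </article>
    <aside>Related links</aside>
  </section>
  <footer>© 2011</footer>
  <script>
    // getElementsByClassName() and innerHTML are among the APIs
    // that HTML5 standardizes across browsers.
    var posts = document.getElementsByClassName('post');
    posts[0].innerHTML += '<p><em>Updated.</em></p>';
  </script>
</body>
</html>
```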

Interview with Tim Berners-Lee

In MIT’s The Infinite History series we can find an 86-minute interview with Tim Berners-Lee. In it we can learn about the origins of the Web first-hand from its creator:

INTERVIEWER: Can you walk us through the story of the origins of the World Wide Web and HTML and how that thought process developed?

BERNERS-LEE: So I went for two years, and I ended up there for ten years. And during that time I had a number of projects in which I had an idea about how things could work. I put it out there and I needed volunteers from other groups to work on it. CERN didn’t have a very centralized, hierarchical management structure because people came from different universities. Physicists came having designed pieces of equipment. And of course they had to collaborate very well, because each piece of equipment had to fit together, eventually be lowered down a few hundred meters below the surface and then work under extreme conditions. But all the same, it was not a military-like place, so people arrived with different computers. They used different documentation systems. So when you wanted to know what was going on you’d have to find– you’d typically have to be introduced to the person. So the coffee areas were really important. They still are, and of course the coffee areas still are important, but at that time they were crucial, because talking about things you’d get introduced to the people who’d written other pieces of the system or designed different pieces of the hardware. And then when you’d nailed them you’d try to remember their face and try to get a clue as to where they might’ve buried the documentation. What system it would be on. So back then, this is 1989 when I proposed the web. I’d thought about it for years beforehand. We got to the stage where computers were running different operating systems. There were Unix-based computers, VAX/VMS-based computers and different flavors of Unix. Then there was a mainframe computer running its own operating system. So there were different flavors of software, different flavors of hardware. They were actually connected. The internet was just starting to become available. Although in practice people didn’t transfer files very much from one computer to another. You could if you knew how.
So if you knew somebody had written a document that you wanted, and it was on another computer, and they were both connected to some sort of network, with enough research and installing enough bits of program you could install things like Telnet, the remote login program, on both, and you could Telnet over to another system and run some programs there which would allow you to root about for the information. Eventually you could transfer it back to the terminal you were using. So that wasn’t really a great way to get information. But on the other hand, there was such a potential. There was so much information which was actually sitting there on desks, going around and around, carefully prepared by somebody – lovingly prepared. Documentation of the part they had been working on for the last five years. Lovingly written up. With references to other documents which, again, you’d have to go through the same process to find. So once you had the idea that actually this could all be part of one virtual documentation system, in which you just click when you want to follow a reference, then it becomes pretty compelling. It was difficult to explain to people because it was a paradigm shift. The idea that any document could be available in a click was just too difficult to explain. Only a few people got it, but that was enough, a few at a time. Each time somebody would talk about it a few people would get it. The others would go away shaking their heads.


INTERVIEWER: Yeah, you can totally hide here. Okay, so as this was going along, you increasingly came to understand that there needed to be standards. And that led you to come to MIT; can you tell me how that happened?

BERNERS-LEE: Well, the whole design of the web is standards. The reason it works: the whole initial architecture diagram of the web shows that you’ve got different servers connected to a big connection bus, and different people, browsing clients, people using the data, connecting into the top of it. And the connection bus is the fact that they all connect in exactly the same way, that each computer talks the same language when you’re browsing and your computer asks a server somewhere for a webpage and pulls that in. The fact that they’ve all got to use the same language, HTML, that’s really important. That’s why it works. Now HTML started off as a really simple language. It was a one-page specification. I just wrote it as I coded it up. The same with the HTTP protocol. The whole idea of having these names for each document that sometimes start with http:, those are URLs or URIs. Those are the three pieces of the architecture of the web. The really important thing was that all the web browsers and servers speak the same languages. Initially, when it was just me, nobody looking over my shoulder, it was easy. I could just write them up. And I had a little webpage about HTML. A little webpage about HTTP. As time went on, I got a visit, while I was still at CERN, from four people from Digital Equipment Corporation, among them Alan Kotok. Alan Kotok, I didn’t know at the time, was very much an MIT alumnus, a tremendous MIT enthusiast. But he worked for Digital Equipment at the time, and he explained that the company was preparing to revise its whole product strategy because of the internet and the web. And the people he brought over were part of a committee which was planning the response to the internet, and he said, I believe that the system works, and the specifications for the system are on some sort of disk somewhere in your office, I understand.
So we’re a large computer company; we’d like to be involved in the future of those specs. We’d like to be able to think about what features they need in the future. We’d like to make sure of that. We’re concerned about the stability. So I asked him what he thought would be a good way forward, and he said, well, for example, making a consortium. Like, for example, the X Consortium, which had looked after the X Window specifications. And I asked him what form it would take. And he said, well, for example, you could base it somewhere like MIT. He said the X Window consortium was based at MIT and that it had worked out very well for Digital. So something like that would work. That visit was followed by a number of months, a year or two, for me. I did come over to MIT. Went to the Lab for Computer Science for a month. I went to the West Coast, to Xerox PARC. Stayed there for a month or so, a guest of Larry Masinter. And looked around. Went to see NCSA, where Marc Andreessen was working on the Mosaic browser. And looked at different models for different platforms of consortium and so on. And decided that, yeah, Alan was right. When I talked to MIT, Al Vezza and Michael Dertouzos knew how to do it. In fact, Michael was extremely supportive. Michael went to a lot of trouble to meet me in Switzerland when he was on a trip back to Athens. We met in Zurich, and in fact I’d gotten his name from David Gifford. David Gifford I met at a networking conference in the north of England somewhere, on a rainy day, when we had to get a bus from one part of the country to another. I sat next to this guy, a professor from MIT, and he asked me, so what are you going to do with this web thing then? And I said, well, I didn’t really know. He said, well, you should talk to Michael Dertouzos at the Lab for Computer Science. mld@lcs.mit.edu. So I scribbled that down and said thanks. And I think David had previously just discussed this sort of thing with Michael.
In fact, there was a fairly deliberate plant of that e-mail address. I’ve had a lot of support from David, and Michael turned out to be great. A huge person, large as life and twice as natural. And also very supportive, not only of doing it at MIT, but also of making it into an international thing. Making sure that it would have a leg in Europe, because I really didn’t want to abandon Europe. That was very important to me, and he saw completely eye to eye with me on that.

INTERVIEWER: Can you elaborate a little bit about what it was about MIT that seemed to make it the right place to do the consortium?

BERNERS-LEE: Well, for one, MIT was a place full of interesting people. So it was important for me to be somewhere where I could chat to people, and I had already spent some time as a guest of [? Camp Sullins ?], who was working on network names and names in the network and things. I gave a talk. In fact, somebody came across a copy of the announcement of that talk, which I gave back in ’92. So, full of interesting people. But more than that, when it came to running the consortium, they’d run the X Consortium. Al Vezza had put together the contracts for the X Consortium. He was prepared to do it again, so long as it looked pretty much like the X Consortium. If I wanted to make it something more like the United Nations, he did not know how to do that. But if I wanted to make an industry consortium based at MIT, then he did know how to do it. What was important for the consortium is that, from the industry point of view, it should be neutral. It should be a place where different members of the industry can come together and talk about the future in such a way that they knew there was no inherent bias towards one of their products. So MIT, as an academic institution, having done that very well for the X Window System before, I think that was one of the things that MIT could provide. But also, there was the clout, there was a reputation. The fact that Al could pick up the phone, call five major computer companies and say, we’re doing a web consortium, are you in? And they’d call him back and say yes. It wasn’t as instant as that; it took a few months. But I think having it somewhere that was credible, with an international reputation like MIT’s, was really important. Getting industry on board, getting all of the industry on board, not just the people who happened to have come to MIT. And also making it clear that there was going to be one web, that was really important. It had to be really good design.
It had to be really fair between the different industry players. It also had to be technically very good, and it had to be developed really rapidly, because in those days web products were turning over extremely rapidly. So it had to work faster than any of the consortia or any of the standards bodies that had worked in the past. So when MIT stood up and said that they were going to do it, people believed it and it happened.

What is a captcha?

A couple of videos that I have prepared on what a captcha is, its origins, its use and its future:

[kml_flashembed movie="http://www.youtube.com/v/VUPSg8Jp_Es" width="560" height="315" wmode="transparent" /]

[kml_flashembed movie="http://www.youtube.com/v/ma0sm4kE0v4" width="560" height="315" wmode="transparent" /]