ipconfig /release /renew

[Screenshot: ipconfig output]

On Windows, the ipconfig command displays all current TCP/IP network configuration values and refreshes the DHCP (Dynamic Host Configuration Protocol) and DNS (Domain Name System) settings. Used without parameters, ipconfig displays the IP address, subnet mask and default gateway of all adapters.

ipconfig /release
Sends a DHCPRELEASE message to the DHCP server to release the current DHCP configuration and discard the IP address configuration of all adapters, or of a specific adapter if the Adapter parameter is included.

ipconfig /renew
Renews the DHCP configuration of all adapters (if no adapter is specified) or of a specific adapter if the Adapter parameter is included.

The commands are entered in the Windows Command Prompt window. These commands are needed in particular under Windows Vista to configure a Nabaztag rabbit when you cannot connect to the IP address 192.168.0.1. As indicated on the Nabaztag help pages, the command sequence

ipconfig /release
ipconfig /renew

solves the problem.
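Where this reset has to be scripted (for example on several machines), the sequence can be wrapped as follows — a minimal Python sketch, assuming a Windows host; the adapter name "Ethernet" in the usage example is only an illustration:

```python
import subprocess
import sys

def dhcp_renew_commands(adapter=None):
    """Build the ipconfig command sequence that releases and then
    renews the DHCP lease, optionally for one named adapter."""
    suffix = [adapter] if adapter else []
    return [
        ["ipconfig", "/release"] + suffix,
        ["ipconfig", "/renew"] + suffix,
    ]

def dhcp_renew(adapter=None):
    """Run the sequence; only meaningful on Windows."""
    if sys.platform != "win32":
        raise OSError("ipconfig is a Windows command")
    for cmd in dhcp_renew_commands(adapter):
        subprocess.run(cmd, check=True)

# Usage (on Windows): dhcp_renew() or dhcp_renew("Ethernet")
```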

WLAN, WiFi, WEP, WPA, WPA2, TKIP, PSK, AES

Wi-Fi Alliance logo

The IEEE 802.11 standard (ISO/IEC 8802-11) is an international standard describing the characteristics of a wireless local area network (WLAN), also called Wi-Fi or WiFi, often presented as a contraction of Wireless Fidelity.

To address the confidentiality problems of exchanges on wireless networks, the 802.11 standard includes a simple data encryption mechanism: WEP (Wired Equivalent Privacy).

WEP is a protocol that encrypts 802.11 frames using the symmetric RC4 algorithm with keys of 64 or 128 bits.
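For illustration, the RC4 stream cipher at the heart of WEP can be sketched in a few lines of Python. This shows only the bare cipher (key scheduling plus keystream generation); real WEP additionally prepends a 24-bit initialization vector to the key and appends an integrity checksum, which are not shown here:

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA): permute 0..255 under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): yield keystream bytes
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # RC4 is symmetric: the same call encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))
```

Because encryption is a plain XOR with the keystream, applying rc4_crypt twice with the same key recovers the plaintext — one reason why reusing a key (as WEP's short IV space encourages) is so dangerous.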

Since WEP is not sufficient to guarantee real data confidentiality, WPA or WPA2 encryption should be used to obtain a higher level of security.

WPA (WiFi Protected Access) is a WiFi network security solution proposed by the WiFi Alliance to fill the gaps of WEP. WPA is a "lightweight" version of the 802.11i protocol, relying on authentication protocols and a robust encryption algorithm: TKIP (Temporal Key Integrity Protocol). TKIP generates keys randomly and makes it possible to change the encryption key several times per second, for more security.

WPA relies on an authentication server that identifies the users on the network and defines their access rights. For small networks, a restricted version called WPA-PSK (Pre-Shared Key) is used instead, deploying the same encryption key on all devices.

In 2004 the Wi-Fi Alliance created a new certification, called WPA2, for hardware supporting the 802.11i standard. Unlike WPA, WPA2 can secure wireless networks in infrastructure mode as well as in ad hoc mode. Like WPA it supports the TKIP encryption algorithm, but it also supports AES (Advanced Encryption Standard), which is much more secure.

Net Transport : a fast and powerful download manager

Last update : February 27, 2012
“Net Transport” (also called NetTransport or NetXfer) is a fast and powerful download manager.

It supports the most popular Internet protocols, including HTTP / HTTPS, FTP (plain, over SSL (Secure Sockets Layer), or over SSH (Secure Shell)), MMS (Microsoft Media Services), RTSP (Real-Time Streaming Protocol), BitTorrent, eMule, and RTMP / RTMPE / RTMPT (Real Time Messaging Protocol).

The current version is 2.96c, released on August 4, 2011. Net Transport is shareware; the price is US$29.95, and a 30-day trial version is available.

Limit my bandwidth on Amazon S3

In 2006, a developer asked in the Amazon Discussion Forum whether there is a risk that bandwidth costs grow beyond a level he was not willing to pay for. AWS answered that such a feature was in the works: the plan was to enable users to cap how much they are charged each month.

Three and a half years later, this is still only a plan. The latest message from AWS was: later this year (2009) or early next year (2010).

Simple DB : Amazon database

Today I installed a simple database on Amazon Web Services. Amazon SimpleDB is a web service providing the core database functions of data indexing and querying in the cloud. The service has been available in Europe for a few weeks. This makes it possible to achieve lower latency, operate closer to other resources like Amazon EC2, Amazon S3, and Amazon SQS in the EU Region, and helps meet EU data storage requirements where applicable.

SimpleDB is simple to use, low touch, scalable, highly available, fast, flexible, inexpensive and designed for use with other Amazon Web Services.

The prices are:

  • First 25 Amazon SimpleDB Machine Hours consumed per month are free
  • $0.154 per Amazon SimpleDB Machine Hour consumed thereafter
  • First 1 GB of data transferred in per month is free
  • $0.100 per GB – all data transfer in thereafter
  • First 1 GB of data transferred out per month is free; thereafter:
  • $0.170 per GB – first 10 TB / month data transfer out
  • $0.130 per GB – next 40 TB / month data transfer out
  • $0.110 per GB – next 100 TB / month data transfer out
  • $0.100 per GB – data transfer out / month over 150 TB
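As a worked example, the tiered data-transfer-out prices above can be turned into a small cost function — a Python sketch, assuming the free first GB is counted separately from (not inside) the 10 TB tier:

```python
# (tier size in GB, price per GB), in the order the tiers are consumed
TRANSFER_OUT_TIERS = [
    (1,            0.0),    # first 1 GB free
    (10 * 1024,    0.170),  # first 10 TB
    (40 * 1024,    0.130),  # next 40 TB
    (100 * 1024,   0.110),  # next 100 TB
    (float("inf"), 0.100),  # over 150 TB
]

def transfer_out_cost(gb: float) -> float:
    """Monthly data-transfer-out charge in USD for gb gigabytes."""
    cost = 0.0
    for size, price in TRANSFER_OUT_TIERS:
        used = min(gb, size)
        cost += used * price
        gb -= used
        if gb <= 0:
            break
    return cost
```

For instance, transferring out 2 GB in a month costs one free GB plus one GB at $0.170.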

When using Amazon SimpleDB, you organize your structured data in domains, within which you can put data, get data, or run queries. Domains consist of items, which are described by attribute name-value pairs. The spreadsheet model shown in the following image explains the structure:

Amazon Simple DB Data Model

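The domain/item/attribute layout can also be illustrated with a small in-memory sketch in Python. The item names and attributes below are made up for illustration, and the query helper mimics only a SimpleDB select with equality predicates:

```python
# A domain maps item names to attribute name-value pairs,
# and items in the same domain need not share the same attributes.
domain = {
    "item_01": {"category": "clothes", "subcat": "sweater", "color": "blue"},
    "item_02": {"category": "clothes", "subcat": "pants", "color": "black"},
    "item_03": {"category": "car parts", "subcat": "engine", "make": "Audi"},
}

def query(domain, **conditions):
    """Return the names of items whose attributes match every given
    name=value pair, like a SimpleDB select with equality predicates."""
    return [name for name, attrs in domain.items()
            if all(attrs.get(k) == v for k, v in conditions.items())]

# Usage: query(domain, category="clothes") finds item_01 and item_02
```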

Amazon CloudFront

update : October 18th, 2011
Amazon CloudFront is a web service for content delivery. It integrates with other Amazon Web Services (mainly Amazon S3) to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.

Amazon CloudFront delivers the content using a global network of edge locations. Requests for objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Amazon CloudFront works seamlessly with Amazon Simple Storage Service (Amazon S3), which durably stores the original, definitive versions of the files.

In Amazon CloudFront, objects are organized into distributions. A distribution specifies the location of the original version of the objects. A distribution has a unique CloudFront.net domain name that can be used to reference an object through the network of edge locations. It’s also possible to map your own domain name to a distribution.
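In practice, referencing an object through a distribution amounts to swapping the host name while keeping the object key; a small Python sketch, with hypothetical bucket and distribution names in the usage example:

```python
from urllib.parse import urlparse

def cdn_url(origin_url: str, distribution_domain: str) -> str:
    """Rewrite an origin (e.g. S3) object URL so that it is served
    through a CloudFront distribution: only the host changes, the
    object key (the path) stays the same."""
    parsed = urlparse(origin_url)
    return f"http://{distribution_domain}{parsed.path}"

# Usage (hypothetical names):
# cdn_url("http://mybucket.s3.amazonaws.com/images/logo.png",
#         "d1234.cloudfront.net")
# yields http://d1234.cloudfront.net/images/logo.png
```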

Amazon CloudFront is

  • fast
  • simple
  • cost-effective
  • elastic
  • reliable
  • global
  • designed for use with other Amazon Web Services

The price depends on the edge location and the volume transferred. The mean price per GB is about $0.20 for low volumes and about $0.10 for high volumes. A simple monthly AWS bill calculator is provided by Amazon. Normal fees apply for Amazon S3 usage, including “origin fetches” – data transferred from Amazon S3 to edge locations.

The edge locations in Europe are:

  • Amsterdam
  • Dublin
  • Frankfurt
  • London

Amazon CloudFront is designed for delivery of objects that are frequently accessed – “popular” objects. Objects that aren’t accessed frequently are less likely to remain in CloudFront’s edge locations’ caches. Thus, for less popular objects, delivery out of Amazon S3 (rather than from CloudFront) is the better choice. Amazon S3 will provide strong distribution performance for these objects, and serving them directly from Amazon S3 saves the cost of continually copying less popular objects from Amazon S3 to the edge locations in CloudFront.

I activated my Cloudfront account on November 1st, 2010.

A recent tutorial on how to set up Amazon CloudFront has been posted by Michael Tieso on the website “Art of Travel Blogging”.

OpenID

OpenID eliminates the need for multiple usernames across different websites, simplifying your online experience. Users can choose the OpenID provider that best meets their needs and that they trust, and they can keep their OpenID no matter which provider they move to. The OpenID technology is not proprietary and is completely free.

OpenID is growing quickly and becoming more popular as large organizations like AOL, Facebook, France Telecom, Google, LiveDoor, Microsoft, Mixi, MySpace, Novell, Sun, Telecom Italia, Yahoo!, etc. begin to accept and/or provide OpenIDs. Today, it is estimated that there are over one billion OpenID-enabled user accounts and over 40,000 websites supporting OpenID for sign-in.

OpenID was created in the summer of 2005 by an open source community (the father of OpenID is Brad Fitzpatrick) trying to solve a problem that was not easily solved by other existing identity technologies. As such, OpenID is not owned by anyone, nor should it be. Today, anyone can choose to be an OpenID user or an OpenID provider for free, without having to register with or be approved by any organization.

The OpenID Foundation was formed to support the open source model by providing a legal entity to act as steward for the community, providing needed infrastructure and generally helping to promote and support expanded adoption of OpenID.

Two directories are available to see where OpenID can be used to log in:

Web cache

last update : August 16th, 2011
A Web cache sits between one or more Web servers (also known as origin servers) and a client or many clients, and watches requests come by, saving copies of the responses — like HTML pages, images and files (collectively known as representations) — for itself. Then, if there is another request for the same URL, it can use the response that it has, instead of asking the origin server for it again.

Web caches are used to reduce latency and to reduce network traffic. A very useful tutorial about web caches has been published by Mark Nottingham under a Creative Commons licence.

An appreciated tutorial about WordPress caching has been posted by Kyle Robinson Young in Web Development Tutorials.

A cached representation is considered fresh if it has an expiry time or other age-controlling header set and is still within the fresh period, or if the cache has seen the representation recently, and it was modified relatively long ago. Fresh representations are served directly from the cache, without checking with the origin server.

HTTP headers are sent by the server before the HTML, and are only seen by the browser and by any intermediate caches. Typical HTTP 1.1 response headers might look like this:

HTTP/1.1 200 OK
Date: Fri, 30 Oct 1998 13:19:41 GMT
Server: Apache/1.3.3 (Unix)
Cache-Control: max-age=3600, must-revalidate
Expires: Fri, 30 Oct 1998 14:19:41 GMT
Last-Modified: Mon, 29 Jun 1998 02:28:12 GMT
ETag: "3e86-410-3596fbbc"
Content-Length: 1040
Content-Type: text/html

The Expires HTTP header is a basic means of controlling caches; it tells all caches how long the associated representation is fresh for. After that time, caches will always check back with the origin server to see if a document has changed. Expires headers are supported by practically every cache. One problem with Expires is that it’s easy to forget that you’ve set some content to expire at a particular time. If you don’t update an Expires time before it passes, each and every request will go back to your Web server, increasing load and latency.

HTTP 1.1 introduced a new class of headers, Cache-Control response headers, to give Web publishers more control over their content, and to address the limitations of Expires. The most important directive is max-age=[seconds]. It specifies the maximum amount of time that a representation will be considered fresh. This directive is relative to the time of the request, rather than absolute. [seconds] is the number of seconds from the time of the request you wish the representation to be fresh for.
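The interplay of Expires and max-age can be made concrete with a small freshness check — a Python sketch over a plain header dictionary, which ignores other Cache-Control directives such as must-revalidate:

```python
from email.utils import parsedate_to_datetime

def is_fresh(headers: dict, now) -> bool:
    """Decide whether a cached response is still fresh at time `now`.
    Cache-Control: max-age takes precedence over Expires (HTTP/1.1);
    max-age is relative to the response's Date header, while Expires
    is an absolute point in time."""
    cc = headers.get("Cache-Control", "")
    for directive in cc.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            date = parsedate_to_datetime(headers["Date"])
            return (now - date).total_seconds() < max_age
    if "Expires" in headers:
        return now < parsedate_to_datetime(headers["Expires"])
    return False  # no freshness information: revalidate
```

With the sample response headers shown above (Date 13:19:41, max-age=3600), the representation is fresh for exactly one hour after the Date.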

A tool (REDbot) to check Cache-Control and HTTP headers has been made available by Mark Nottingham; a public instance is available at redbot.org.

Small Java Web Server

Paul Mutton, who graduated at the University of Kent at Canterbury in 2001 with first class honours in Computer Science (BSc Hons), created a very small standalone web server written in Java (less than 4 KByte). A more advanced version has less than 10 KByte and is called Jibble Web Server.
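To give an idea of how little code a bare-bones web server needs, here is a comparable sketch in Python rather than Java: a listener that answers a single HTTP request with a static page, with no request parsing and no error handling:

```python
import socket

def make_listener():
    """Open a listening socket on 127.0.0.1 with an OS-assigned port."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    return srv

def serve_one(srv):
    """Answer exactly one HTTP request with a static page, then close."""
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    body = b"<html><body>Hello from a tiny server</body></html>"
    conn.sendall(b"HTTP/1.0 200 OK\r\n"
                 b"Content-Type: text/html\r\n"
                 b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                 b"\r\n" + body)
    conn.close()
    srv.close()
```

A real server would of course parse the request line, map paths to files and loop over connections, but the skeleton above is the whole protocol handshake.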

Both programs are OSI Certified Open Source Software, available under the GNU General Public License (GPL). Commercial licences are also available.

Other useful Java projects from Paul Mutton are :