David Mentré's blog

Last entries

Thu 28 Aug 2014

DNSSEC Validator supports DANE!

After a recent Firefox update, I was pleased to discover that the DNSSEC Validator plug-in (now called DNSSEC/TLSA Validator) supports the DANE protocol. With this plug-in, you can see that my domain www.bentobako.org is correctly verified by DNSSEC (but not by DANE). The plug-in is available for the main browsers on the market (Internet Explorer, Mozilla Firefox, Google Chrome, Opera and Apple Safari).

DANE (DNS-based Authentication of Named Entities), which relies on DNSSEC, makes it possible to validate the SSL/TLS certificate of a web (HTTPS), email or other server without resorting to certification "authorities" that charge a lot of money and deserve only dubious trust. In other words, you do not use a PKI (Public Key Infrastructure) but publish signatures of the certificates directly in DNS records, records that are themselves verified by DNSSEC (you can also combine PKI and DANE if you want).
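To make this concrete, here is a minimal Python sketch (my illustration, not something the plug-in does) of how the data of a TLSA record can be computed from a server certificate. The "3 0 1" parameters mean certificate usage 3 (DANE-EE: the certificate itself is the trust anchor), selector 0 (match the full certificate) and matching type 1 (SHA-256), per RFC 6698; the file name server.der is a placeholder for your certificate in DER form.

```python
import hashlib
from pathlib import Path

def tlsa_3_0_1(der_cert: bytes) -> str:
    """Data of a '3 0 1' TLSA record: SHA-256 of the DER-encoded certificate."""
    return "3 0 1 " + hashlib.sha256(der_cert).hexdigest()

if __name__ == "__main__":
    der = Path("server.der").read_bytes()  # placeholder path for the DER certificate
    # Record name: port 443, TCP, for the HTTPS server of the domain.
    print("_443._tcp.www.bentobako.org. IN TLSA", tlsa_3_0_1(der))
```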

Everybody is master of their own infrastructure and lets their "clients" check that they are really reaching the right web server, in full security. In my opinion this is a small revolution in the means available to encrypt and authenticate communications on the Internet: security managed in a decentralised way and at zero cost. Of course, the protocol is not widely deployed yet, hence my interest in the DNSSEC/TLSA Validator plug-in.

For more details on DANE, read Stéphane Bortzmeyer's excellent write-ups of RFC 6394 and RFC 6698.

All that remains is "just" to configure all this on my web server. Does anybody know of a simple document explaining how to run one's own certificate authority, including the DANE configuration on top?

Mon 25 Aug 2014

Backup improvements

I recently bought a small NAS, a QNAP TS-112P, to handle my backups, replacing my old PC which took up a crazy amount of space and guzzled power. There were several reasons for this choice:

  • a good WAF (Wife (& husband! ;-) ) Acceptance Factor), because it is small, pretty and white;
  • its ecological aspect: it consumes only 7 W, maybe a bit more depending on the hard disk, a 2 TB Toshiba DT01ACA200;
  • and above all, good Debian support!

So I installed Debian Wheezy 7.6 on it, following Martin Michlmayr's instructions for the QNAP TS-11x/TS-12x (the TS-112P is not officially listed, but Martin confirmed to me that it works without problems). Thanks to a recent version of qcontrol, (almost) all the hardware is supported: the LEDs, the variable fan speed, etc.

I then installed BackupPC, my favourite backup software, and redid the configuration to back up my Internet-facing server, another NAS and a Windows laptop.

Everything seems to work correctly; one feels much more at ease with a good backup. :-)

Wed 20 Aug 2014

HTTPS taken into account by Google for website quality

On 6 August, Google announced that its search algorithm would (slightly) favour a website served over HTTPS compared to a standard website without HTTPS. The clearly stated goal is to push websites to switch to HTTPS.

Two remarks:

  • I think this is a good thing: all websites should use HTTPS. It would hinder (without preventing) spying by the NSA and other intelligence agencies;
  • A single big web player can influence the behaviour of many other sites. That is chilling (though hardly new).

There are also some pertinent remarks in the comments on the announcement.

Incidentally, this blog is also available over HTTPS, unfortunately with a self-signed certificate and a sloppy configuration.
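For the record, generating such a self-signed certificate programmatically is easy. Here is a minimal sketch using Python's third-party cryptography library (assuming a recent version of it; the host name and output file names are my placeholders, and openssl can of course do the same in one command line):

```python
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate an RSA key and a certificate signed by that same key
# (subject == issuer, hence "self-signed").
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "blog.bentobako.org")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(key, hashes.SHA256())
)

with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```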

Sat 04 Jan 2014

Server upgrade to Wheezy: beware of Dovecot!

Yesterday I upgraded my Debian server from Debian 6 (Squeeze) to Debian 7 (Wheezy). Overall it went fairly well, most probably because I don't use that much software. Another reason is that the two main packages I use, Nginx and PostgreSQL, were drawn from Squeeze backports, so they were close to the Wheezy versions.

Having the important upgrade notes of all packages at the very beginning of the upgrade was very helpful.

I nonetheless had two big issues, with Dovecot and with PHP as CGI.

Dovecot

I had to migrate my configuration file from Dovecot 1.x to 2.0. Dovecot 2 is supposed to be able to read a Dovecot 1 configuration file, but it did not work for me. First of all, I had to fix the import of the SSL certificates (easily done with help from README.Debian.gz). Secondly, I use non-standard ports and I was not able to fix that easily.

Overall, it was much easier to write a new Dovecot 2 configuration file from scratch. Using doveconf -c -n (also mentioned in README.Debian.gz) was very helpful to identify the items to modify or add.

I don't see what the Debian developers could have done better; the issue was at least well documented.

PHP as CGI

I am using the Nginx web server, so I had a custom init.d script to launch PHP as FastCGI, with Nginx and PHP communicating through a Unix socket. I don't know why, but my PHP-as-CGI set-up was broken after the upgrade.

I easily fixed this issue by installing the php5-fpm package and using the proper socket (/var/run/php5-fpm.sock) for the Nginx-to-PHP link. My server configuration is thus more standard and easier to maintain. Good! :-)
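As a quick sanity check after such a change, one can verify that the FPM socket accepts connections at all. A minimal sketch in Python, assuming the Debian default socket path mentioned above:

```python
import socket

SOCK = "/var/run/php5-fpm.sock"  # Debian default path for php5-fpm

# A real request would speak the FastCGI protocol; simply connecting
# is enough to tell whether php5-fpm is up and the path is right.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(SOCK)
    print(f"{SOCK}: php5-fpm is accepting connections")
except OSError as exc:
    print(f"{SOCK}: cannot connect ({exc})")
finally:
    s.close()
```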

Feature wish for Debian 8

For the next Debian, it would be useful to have a script that scans the installed packages and prints notes telling whether each upgrade can be done automatically or needs manual intervention (and why, pointing to some further documentation to read). It would be very useful to know about issues before starting the upgrade.

Tue 17 Dec 2013

Book review: Better Embedded System Software

Better Embedded System Software is a very good book if you write software, and not only embedded software!

I discovered this book while following the Toyota Unintended Acceleration case, in which Philip Koopman, the author of this book, was a plaintiff's expert. Philip Koopman is an Associate Professor at Carnegie Mellon University.

Why is this book so good? Because it explains in everyday words what the ideal software development process should be. And not only does it detail the ideal process, it gives concrete, down-to-earth advice on how to improve your current process and software development habits.

The book is divided into 30 small chapters, following the order of the usual V cycle (overall process, requirements and architecture, design, implementation, verification and validation, critical system properties). The chapters are very short, about 10 pages, and relatively independent. This is one of the great points of the book: it is easy to read a chapter; there is no need to allocate a long time slot for it. You can pick the chapter that is most interesting to you. And as the chapters are right to the point, you immediately get useful advice that you can start applying to your own development.

The other great quality of this book is that the author has a strong background in embedded software development. The advice is therefore realistic and graduated. The author knows that you are going to hit barriers and limitations in your work environment, and he helps you deal with them. For example, there are two chapters on producing some documentation, but not too much. Even if you cannot apply the whole set of advice, you nonetheless get some ideas on how to improve your current software and what could be done in later steps.

I am not an expert on all the topics presented in this book (that's why I bought it!), but for the domains I knew (e.g. concurrent programming), the advice seems balanced and appropriate.

Of course, 10 pages per chapter is very short, and some subjects are so wide that they would deserve a book of their own (e.g. safety and security). In those cases, Koopman's book gives useful pointers to continue your reading, and the summary he provides is an excellent introduction to the topic.

As I have said many times, we are currently making very bad software and we should improve this situation. Better Embedded System Software is one of the very few books that you should keep close at hand and consult on a regular basis.

If you cannot afford the book, Philip Koopman has made available some quite detailed slides on Avoiding the 43 Top Software Risks.

Sun 01 Dec 2013

The day I left GMail

On Saturday 2013-11-30 at 16:06 I left GMail. Now all (or at least most) of my emails are delivered to my own email server, based on Postfix, Dovecot and a Debian server.

Having to manage such a server is an added burden. It was much easier for me to let Google administrators handle all the issues. But now at least I know where my emails are stored (in France) and how they are handled. I am less afraid of seeing my GMail emails read by American spy agencies through the PRISM program; or at least, it will be a little more difficult for those agencies to access them. Hopefully, I won't have too many administration issues with this server.

Updated introduction to formal methods

As in 2010 and 2011, I gave a two-hour talk presenting formal methods at ESIR, an engineering school of the University of Rennes 1. I have updated the presentation a bit: I think it is better illustrated, and I added a few demonstrations of the tools in use. The presentation is available as a PDF or as source in PowerPoint format (Art Libre licence, except for the illustrations, which I picked up here and there on the web).

Note for free-software advocates: most of the software mentioned is free software.

Fri 15 Nov 2013

Presentation on internationalisation in MapOSMatic

At the DevCamp on Wednesday 13 November at the Cantine numérique rennaise, I gave a short presentation on internationalisation (i18n) in MapOSMatic: how it is done in practice in the code, the translation process we set up, and the results obtained. The slides are available as LibreOffice sources or as a PDF.

Wed 13 Nov 2013

Mozilla published a guide to help configure TLS on web servers

In a blog post called "Navigating the TLS landscape", Mozilla announced its Security/Server Side TLS guide.

The main objective of this guide is to help sysadmins configure their web servers in order to improve security for web clients. The guide gives the preferred configuration as well as the justification for the choices made, which is a very good thing. There is a strong emphasis on forward secrecy. Configuration parameters for several web servers are provided (Nginx, Apache, Haproxy, Stud, ...). It also provides some tips to check the configuration.
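As an example of such a check (my own sketch, not taken from Mozilla's guide), Python's standard ssl module is enough to see which protocol version and cipher suite a server negotiates; an ECDHE or DHE key exchange in the cipher name indicates forward secrecy:

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Report the negotiated TLS version and cipher suite of a server."""
    ctx = ssl.create_default_context()
    # We only inspect the negotiated cipher here, so skip certificate
    # verification (do not do this for real traffic!).
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, _proto, bits = tls.cipher()
            print(f"{host}: {tls.version()}, cipher {name} ({bits} bits)")
            # TLS 1.3 suites (named TLS_*) always provide forward secrecy.
            fs = name.startswith(("ECDHE", "DHE", "TLS_"))
            print("forward secrecy:", "yes" if fs else "probably not")

check_tls("www.bentobako.org")  # host taken from this blog; try your own
```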

Next step: apply it to my own server!

Thu 07 Nov 2013

Map of exclusion zones around French nuclear power plants

Since the Fukushima accident, I have been paying a bit more attention to nuclear power plants and their risks. I had seen a map of 100 and 150 km exclusion zones around the Japanese plants, so I remade this map for the French nuclear plants, with exclusion zones of 30, 100 and 150 km.

30 km is the exclusion zone around Chernobyl. For Fukushima, there are pockets of significant radioactivity within a radius of at least 100 km.

Few medium-sized cities are more than 150 km away from a plant. In Brittany, Rennes and Brest are, but Brest hosts the nuclear warheads of our tactical submarines, so I am not sure that is much better. :-]

Thanks to OpenStreetMap for the base map and Leaflet for the slippy map. ;-)
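My map is hand-written Leaflet, but for anyone who wants to reproduce the idea, here is a minimal sketch in Python with the folium library, which generates Leaflet maps over an OpenStreetMap background; the two plants and their coordinates below are just approximate examples:

```python
import folium

# Approximate coordinates of two French nuclear power plants (examples only).
plants = {
    "Flamanville": (49.54, -1.88),
    "Chinon": (47.23, 0.17),
}

m = folium.Map(location=[47.0, 2.5], zoom_start=6, tiles="OpenStreetMap")
for name, (lat, lon) in plants.items():
    for radius_km, colour in [(150, "yellow"), (100, "orange"), (30, "red")]:
        folium.Circle(
            location=(lat, lon),
            radius=radius_km * 1000,  # folium radii are in metres
            color=colour,
            fill=False,
            tooltip=f"{name}: {radius_km} km",
        ).add_to(m)
m.save("exclusion_zones.html")  # a self-contained slippy map
```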

Introductory slides on Frama-C

I recently made slides at work to present Frama-C: what it is, the various plug-ins, and how to use the two main ones, Value analysis and WP. The examples used in the slides are also available.

The attached version has been updated to fix minor issues found by readers of the frama-c-discuss mailing list.

Enjoy! (and let me know if you find them useful)

Fri 05 Jul 2013

Book review: Éloge de l'oisiveté (In Praise of Idleness)

This little book (32 pages, published by Alia) is old, from 1932, but still astonishingly topical. In it Bertrand Russell, the famous logician, lays out his ideas on work, its usefulness to society, and how to bring more happiness to everyone.

I want to say, in all seriousness, that a great deal of harm is being done in the modern world by belief in the virtuousness of WORK, and that the road to happiness and prosperity lies in an organised diminution of work.

Reducing working time! That is Bertrand Russell's goal. And he does not do things by halves, since he advocates four hours a day. :-) But beyond this proposal, Russell explains why working little is necessary: to have time for leisure, to develop creative activities, artistic as well as scientific, to educate oneself, etc. In short, to make man an active being working for the good of society, and not merely a passive one, good only for numbing himself at work and consuming in the little time he has left.

1932! Eighty years ago these questions were already being discussed, and I do not have the impression that we have made much progress since. At a time when automation keeps growing, when productivity is constantly improving, when some people are overloaded with work while others are asking for it, shouldn't we collectively ask ourselves how to distribute work, and above all wealth, fairly across our society? Perhaps we can look beyond the (dead-end) neo-liberal horizon that is presented to us as the only possible way? Bertrand Russell invites us to think about it, intelligently and accessibly.

Good nature is, of all moral qualities, the one that the world needs most, and good nature is the result of ease and security, not of a life of arduous struggle. We have chosen, instead, overwork for some and starvation for others: in this we have been foolish, but there is no reason to go on being foolish forever.

Sat 29 Jun 2013

WE programming idea: opportunistic secure email exchanges

A long time ago, a French computer science magazine used to propose programming ideas ranging from a few hours to a complete week-end (WE) of work. Here is such an idea to elaborate on, even if it might take a little more than a WE to implement fully. ;-)

Observation: secure email exchange with OpenPGP or S/MIME does not work

Like many others, I have tried to exchange secured (encrypted and strongly authenticated) emails with friends and other people, in my case in OpenPGP format using the free GnuPG software. But, like many others, I have stopped because it simply does not work.

Why? Probably for several reasons:

  • One needs to understand at least the basic principles of asymmetric cryptography: public and private keys. It is not that complicated (if you don't go into the fine details ;-) ) but it is probably already too complicated for the average user;
  • One needs to create one's key and load it into the email program. If one has several computers, this must be done on each of them. Creating the key adds complicated steps; loading it on each computer is cumbersome;
  • If you want to participate in the "web of trust" (for OpenPGP emails), you need to have your key signed by other people and to sign other people's keys. Once again, this is very complicated for the average user to understand;
  • Even if you don't want to participate in the "web of trust", you need to check your correspondents' key fingerprints to gain strong authentication. Once again, a step that is complicated to understand and to perform;
  • Even if you have done all of this and understood it, each time you want to send an email you need to enter the password that unlocks your private key. This is annoying.

Regarding S/MIME, you have roughly the same complications overall. It can be a little simpler, but as you need a Public Key Infrastructure (PKI), S/MIME's usefulness is limited to a single administrative entity managed by trained system administrators, in other words a big company.

A proposal: opportunistic secure email exchange

The basic approach is pretty simple: make a plug-in for some email programs. The first time the plug-in is installed, it automatically creates a public/private key pair for each email address used by the user.

Then, each time a user A sends an email, the public key attached to A's email address is automatically sent with the email. Therefore, if A communicates with another person B using the same kind of plug-in, the receiver detects that A is capable of using secure emails. With the next email from B to A, B's plug-in automatically attaches B's own public key.

Therefore, after two email exchanges between A and B, each has the public key of the other and both can exchange secure emails. When one sends an email, the email program detects that it has the correspondent's public key and automatically encrypts and signs the email, as in the sketch below.
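Here is a rough Python sketch of that exchange logic. The dictionaries stand in for a real keyring and the strings for real key material and MIME handling; an actual plug-in would delegate the cryptography to a library such as GPGME:

```python
# Keys we own (one pair per address) and keys learned from correspondents.
my_keys = {"a@example.org": "A-public-key"}
known_keys = {}

def receive(mail: dict) -> None:
    """Remember any public key a correspondent attached to their message."""
    if "attached_key" in mail:
        known_keys[mail["from"]] = mail["attached_key"]

def send(sender: str, recipient: str, body: str) -> dict:
    """Always advertise our key; encrypt and sign once we know the peer's key."""
    mail = {"from": sender, "to": recipient, "body": body,
            "attached_key": my_keys[sender]}
    if recipient in known_keys:
        mail["encrypted_to"] = known_keys[recipient]
        mail["signed_by"] = my_keys[sender]
    return mail

# The first mail from B carries B's key; everything after that is protected.
receive({"from": "b@example.org", "attached_key": "B-public-key"})
print(send("a@example.org", "b@example.org", "hello"))
```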

Of course, with this scheme, you don't gain strong authentication of the remote party: a man-in-the-middle attack is still possible. But nothing prevents you from using another cryptographic protocol to check afterwards that the remote user really is who they claim to be, as in the ZRTP protocol.

But the danger nowadays is not man-in-the-middle attacks, it is continuous spying on servers, like the USA's PRISM program. This opportunistic encryption scheme would allow the average user to use encryption. The emails would be stored encrypted on GMail, Microsoft or Yahoo servers and be in clear only on the user's computer.

The WE programming idea

I think you have now understood this WE programming idea: implement such a plug-in doing opportunistic email encryption, e.g. as a Thunderbird plug-in. :-) All the libraries are there, such as GnuPG's GPGME (GnuPG Made Easy) library to manage keys, encryption and authentication.

Anybody willing to take the challenge? ;-)

Sun 26 May 2013

Issues with distributions, not only a Debian specific problem

Some time ago, I blamed Debian for not paying enough attention to its users. Apparently, I'm not the only one to make such remarks: Ingo Molnar, a famous Linux kernel developer working for Red Hat, has made similar ones, and he even proposed some technical solutions.

Other solutions are already available. For example, for OCaml-specific software, OPAM, an OCaml-specific package system, seems a very interesting way to work around the freshness issue of OCaml Debian packages.

I'm not sure Ingo's answers or OPAM are THE answers, but they at least open interesting perspectives. Handling security and licensing issues is very difficult and not yet solved, and distributions are doing heavy integration work that is currently not done by anybody else.

It is nonetheless refreshing to see people thinking about those issues and proposing new solutions. I find Ingo's proposals of sandboxing, flat dependencies and not forcing people to upgrade very interesting. If you read French, you can also read this LinuxFR article that briefly reviews the current proposals.

Tue 14 May 2013

High-level requirements for re-demexp

I recently wrote about the three main points to work on if one were to start re-engineering demexp. The first point was to start from high-level requirements, and I have now started writing them. Let me know if you have comments or questions, either directly at dmentre@linux-france.org or through this post's comments.

Sun 12 May 2013


Regards Citoyens' analysis of democratic transparency

A very good analysis by Regards Citoyens of democratic transparency, which seeks to draw "a clear line between private life and the information that matters for democracy".

The transparency of parliamentary votes alone seems an obvious requirement to me. Our elected representatives are supposed to represent us: knowing the positions they take in our name goes without saying, doesn't it? The nine other proposals seem just as interesting.

A substantive analysis that counter-balances a little all this vain political agitation. Will we one day return to more reason and less emotion in politics?

Sat 04 May 2013

Re-engineering demexp

A long, long time ago, I worked on demexp, a piece of software made to support the Democratic Experience, a direct democracy initiative. demexp is programmed in the OCaml language. The software contains both a server and a client; the client user interface is programmed with Gtk+. I also tried to make a web interface, without much success. The whole software is written in literate programming style. After several years of development, I got bored and stopped working on it. The software project never had a lot of momentum. The Democratic Experience political project is still alive, though without much activity as far as I know.

After those years, one (at least me ;-) ) might wonder what failed in the project and how it could be improved from a software engineering point of view. I see three main points to work on if one wanted to re-engineer demexp.

1. Start from a well-defined high-level specification. When we started demexp, we did not know exactly what kind of software we wanted; we were discovering new territory. Nowadays, I think we could have a much clearer picture of the requirements. As for any software, starting from clear requirements is mandatory. :-) So we should start from a set of high-level requirements, from which to derive low-level requirements for the software.

2. Design a modular architecture. I made demexp as a big monolithic piece of software. The server part contained the network interface, the question and response database, the security model, the delegation system, etc. The client contained a user interface as well as code to authenticate the user and access the server. I now think I should have started from a much more modular architecture, with several small programs, each of them focused on one part of the system. Having several programs would force me to define clear interfaces between them, thus imposing a cleaner architecture. And if the task of developing all those programs proved too big, I could at least focus on a subset of them to get a system with less functionality, but one that works.

3. Use formal methods to verify some properties. Using formal methods is not strictly required but, well, it interests me :-) and I think formal methods could bring benefits to such software, for example to prove that some properties of the system hold, e.g. that the security model is correctly designed and implemented.

Moreover, I see some additional points to review in the original demexp design:

  1. Drop the literate programming approach. It is too costly to maintain (it is heart-breaking to remove some code, but it is even harder to remove some documentation). Well-defined documentation for the tricky parts and a global overview of the system would be enough, I think.
  2. Traceability, from the high-level specification to the low-level one, down to the code and its tests or proofs. As with safety-critical software, maintaining traceability would give a cleaner view of the system, would help code reviews (one can dream! ;-) ) and would show that the software meets its goals.
  3. Focus on smaller goals. The overall objectives of demexp were very ambitious and thus difficult to reach. One should probably start from less ambitious goals and try to make useful software in small steps.

Regarding the use of the OCaml language, it was probably part of the reason why we never gained many contributions. But OCaml is so nice to use! I would probably keep that part... or not. I currently have some other interesting and rather obscure languages to look at. :)

Thu 02 May 2013

A birth announcement card made with Scribus

Here is an example of a birth announcement card made with the desktop publishing (DTP) software Scribus. I was freely inspired by designs seen on the Internet, but everything was created from scratch.

The Scribus source is available (public domain licence: do whatever you want with it), as well as an example of the PDF file produced.

The PDF file was used without any problem by a print shop. Conclusion: you can make a free birth announcement card with free software. :-)

Mon 08 Apr 2013


A new machine for this blog

I recently migrated this blog to a new machine and changed some configuration parameters. The new server is a bit less powerful but also less expensive. :-)

The URL of this blog is now blog.bentobako.org. All previous links should be correctly forwarded to and handled by the new web server.

For the technically inclined, I use Postfix as SMTP server, Dovecot as IMAP server, Nginx as web server and Dotclear as blog software. I am quite happy with Nginx, which is light, well documented, and whose configuration is easy to understand. I was using Lighttpd previously but dropped it because it was not well maintained (no new version for a long time) and not well documented.

My Dovecot and Postfix configuration allows virtual users (as many users as you like) with authentication shared between the two services (same login and password). I might publish my configuration if people are interested. Let me know!

Sun 27 Jan 2013

The failures of Debian (and its derivatives)

I am a long-term Debian user. I have used Debian since probably the mid-90s. I use the latest Debian stable on my personal server and at work, and Ubuntu on my desktop and laptop machines at home. Using it does not mean I am entirely satisfied with it, to say the least. I think the distribution is not making enough efforts for its users, and I'll try to explain why. Of course, YMMV. ;-)

  • Debian packaging is made for developers, not users. Debian has too many packages. To take just one example, the OCaml compiler is split into the main compiler, its libraries, the native compiler and the bytecode one, etc. Why so many packages? If I am an OCaml developer, I want all of them, so why do I need to manually find (the naming is far from obvious, at least for beginners) and install so many packages? I have heard several reasons: it allows common parts to be factorised between the different binary architectures, it allows installing the command-line parts without the graphical parts, it allows installing only what the user wants, etc. For me, those reasons are just plain wrong. We have more and more disk capacity on our machines, so disk usage is no longer a limitation. A package should be able to activate its graphical parts automatically if the X server is available. And the factorisation of shared parts between Debian architectures should be done on the servers storing the packages, not through the packaging system.
  • Debian has out-dated software. Debian Wheezy is about to be released and it will ship GNOME 3.4. But GNOME 3.6 is already out and GNOME 3.8 is on its way. And I am only taking GNOME as an example; it is the same issue for a lot of software within Debian. I have heard this is for stability reasons. But software packaged in Debian is already stable! It should take 10 or 15 days to integrate a new piece of software into Debian stable, not months or even years (the time between successive stable releases). I acknowledge that some packages have complex interdependencies. For example, when a new release of the OCaml compiler is out, one needs to recompile all OCaml packages. But this constraint only concerns OCaml packages. Why should I wait for a new stable version of Debian to get the newly released OCaml compiler? To me this sounds just plain wrong.
  • Nobody uses plain Debian stable. Debian developers use unstable. Debian users use Debian stable, but enriched with backports because of out-dated software, or derivatives like Ubuntu. The fact that Debian developers do not use what they recommend to users is bogus. I know they do this for technical reasons, but that is not a good reason. Debian developers should use what they provide to their users, except maybe for the few packages they are working on.
  • There are too many dependencies between packages. The dependency system of Debian is a nice piece of work; it was ahead of its time when it was created. But the use of dependencies has been perverted. The dependencies are manually computed (thus a source of errors and bugs) and at the same time any software can write to about any part of the file system. Due to too many dependencies and the lack of clean interfaces between sub-systems, the installation of one recent user application can drag in a ton of packages, down to a new kernel or libc. Why is it so? I think the sub-systems of Debian (e.g. the X graphical infrastructure, the kernel and base C libraries, the OCaml system, ...) should be isolated from one another. That would allow each to be updated without waiting for the others. Having dependencies between 29,000 packages is just not scalable, all the more so if those dependencies are manually computed.
  • Debian packaging is lacking automation. I am a developer. Debian packagers are developers. It has always astonished me how much manual work must be done to make and maintain a Debian package. All developers know that if they want to be efficient, they need to automate their work as much as possible, so as to focus their manpower on the complex parts. Everything should be automated in Debian packaging, starting from a minimal description file. Automation should be the default (package creation, tests, static analysis, ...), not the exception.
  • One cannot install several versions of the same software simultaneously. As a user or developer, I want to use the stable version of a piece of software and maybe also the latest version that has just been released, in order to run a few tests or check that my software still compiles with the shiny new compiler. Debian does not allow me to do that. And if I install a newer package, downgrading to the previous version is complex and error-prone.
  • Debian is undocumented. I am not talking about the installation guide, which is nice; I am talking about the modifications made to software by Debian developers. Modifying the "standard" (for that software) way of configuring or using it has always seemed suspect to me, even if I agree that harmonized configuration is a real advantage (all configuration files in /etc, for example). But all of those modifications should be documented in the README.Debian file. To take an example, the last time I tried to install the dokuwiki Debian package, I was unable to configure it! The way to add new users had been changed compared to a regular dokuwiki (the web interface was disabled), and those changes were undocumented. That should be a release-critical bug! Without proper documentation, the user cannot use the software. And, of course, the reason behind such changes should be questioned, even security (very secure but unusable software is pointless; security is a trade-off).

So, why am I complaining? Why don't I become a Debian Developer, so that I can fix it? Because a single developer is not going to change the root causes of those issues. They need a massive development effort, or at least massive acknowledgement by the Debian Developers. And I don't have ready-made answers to those issues (even if I have some ideas to solve them).

Is the grass greener in other fields? I don't think so, or at least I am not aware of it. I like Debian for its community approach, its focus on Free Software (even if it is sometimes imperfect) and the wide range of software packaged in it (the OCaml packages are numerous, for example). I just hope that the whole Debian community will focus more on user-related issues in the future.
