Allman Professional Consulting, Inc.

Exceptional Project Management Tradecraft

Getting Fedora 20 Sound Working Again

September 17th, 2014

When I updated Fedora 20 some time (and many updates) ago, my sound stopped working. I don’t know exactly which update caused the problem; I don’t play audio or watch video all that often on my laptop. I expected that a pending update would resolve the problem, but after a few months I figured it wasn’t the Fedora software but something on my system. Time to roll up the sleeves and dig in. I used this page for some excellent background, information, and suggestions:

After a few days of work here’s my step-by-step to resolve the problem.

First, I listed what sound card and device IDs I have by running

aplay -l

Here’s what mine looked like when I started:

**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
 Subdevices: 1/1
 Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 7: HDMI 1 [HDMI 1]
 Subdevices: 1/1
 Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 8: HDMI 2 [HDMI 2]
 Subdevices: 1/1
 Subdevice #0: subdevice #0
card 1: PCH [HDA Intel PCH], device 0: VT1802 Analog [VT1802 Analog]
 Subdevices: 0/1
 Subdevice #0: subdevice #0
card 1: PCH [HDA Intel PCH], device 2: VT1802 Alt Analog [VT1802 Alt Analog]
 Subdevices: 1/1
 Subdevice #0: subdevice #0

Next I ran alsamixer to look at each device. Hit <F6> to select the device to check. I checked “default” and the two cards that you see in the list from “aplay -l” above, namely “HDA Intel PCH” and “HDA Intel HDMI.” After selecting a sound card I hit <F5> and checked each item, such as “Master,” “Speaker,” “PCM,” etc., to be sure it wasn’t muted and was set to at least 50. If an item can be muted then you’ll see either “MM” or “OO” at the bottom of its vertical level indicator. If any are muted (“MM”), type “m” to toggle the item to unmute. The item names are a little hard to read at the base of each indicator, but the selected item’s name also displays on the fourth line down from the top left.
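If you’d rather script the unmuting than do it interactively in alsamixer, amixer can make the same changes. This is only a sketch: the control names (“Master,” “Speaker,” “PCM”) and the 70% level are assumptions; run “amixer -c 1 scontrols” to see what your card actually calls its controls.

```shell
#!/bin/sh
# Unmute and raise the usual playback controls on card 1 (the PCH card here).
# Control names differ between cards; check yours with `amixer -c 1 scontrols`.
for ctl in Master Speaker PCM; do
    # `|| true` so a control that doesn't exist on this card is just skipped
    amixer -c 1 sset "$ctl" 70% unmute 2>/dev/null || true
done
echo "done adjusting mixer controls"
```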

I played around with the “speaker-test” program and played an audio file (a .wav file) on each card/device to see which if any played sound. Here’s the command:

aplay -D plughw:CARD,DEVICE WAV_FILE

I tried each card/device combination: 0,3; 0,7; 0,8; 1,0; and 1,2. Here’s exactly what I typed for the first card/device:

aplay -D plughw:0,3 /usr/share/skype/sounds/SkypeLogin.wav

Sound played from card/device 1,0. No sound from anything else. I didn’t expect anything when I tried the HDMI card devices since I don’t have any HDMI hardware for the sound card to send audio to.
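To avoid retyping the command for each pair, the trials can be wrapped in a small loop. The .wav path below is just an example; point it at any wav file you have:

```shell
#!/bin/sh
# Try a wav file on every card,device pair that `aplay -l` reported.
WAV=/usr/share/sounds/alsa/Front_Center.wav   # example path; adjust to taste
for cd in 0,3 0,7 0,8 1,0 1,2; do
    echo "--- trying plughw:$cd ---"
    aplay -D "plughw:$cd" "$WAV" 2>/dev/null || echo "    no sound (error) on $cd"
done
```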

I want the sound played on the PCH card, device 0. I would also like the PCH card to always get the same card number and not change after an update. I decided to make it card 0. To set this up I first ran this command to determine the driver for each card:

cat /proc/asound/modules

Mine looks like:

 0 snd_hda_intel
 1 snd_hda_intel

OK, the driver for both sound cards is “snd_hda_intel.” Next I ran:

lspci -nn | grep -i audio

Here’s what my system looks like:

00:03.0 Audio device [0403]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller [8086:0c0c] (rev 06)
00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)

Frankly I can’t tell which one is the HDMI card, but it really doesn’t matter yet. The problem is to keep the card numbers from moving around, so that the first card isn’t card 0 one day and card 1 after an update. The way to do that is to add two lines (one for each card) to /etc/modprobe.d/alsa-base.conf to specify the “index” (= card number, I think) like this:

options snd-hda-intel index=0 model=auto vid=8086 pid=8c20
options snd-hda-intel index=1 model=auto vid=8086 pid=0c0c

The values for “vid” and “pid” are from the output of the “lspci -nn | grep -i audio” command. Right at the end of each line the “vid” and “pid” are surrounded by square brackets in the format “[vid:pid].”
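A small script can pull the vid and pid out for you. Here the lspci line is pasted in as a string so the example is self-contained; on a live system you’d feed in “lspci -nn | grep -i audio” instead:

```shell
#!/bin/sh
# Extract the [vid:pid] pair from one line of `lspci -nn` output.
line='00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)'
# The vid:pid is the bracketed pair containing a colon, at the end of the line.
ids=$(printf '%s\n' "$line" | grep -o '\[[0-9a-f]*:[0-9a-f]*\]' | tail -n 1 | tr -d '[]')
vid=${ids%%:*}
pid=${ids##*:}
echo "vid=$vid pid=$pid"
```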

Lastly, tell alsa that you want device 0 on card 0 to be the default pcm sound device. I set up the file /etc/asound.conf as follows:

# Place your global alsa-lib configuration here...
pcm.!default {
 type hw
 card 0
 device 0
}
At this point everything is about set. The only uncertainty is which of the two lines in /etc/modprobe.d/alsa-base.conf is the PCH card. I’m not sure if I have the index values correct. I might need to switch them.

I rebooted the system, logged back in and ran the “aplay -l” command again. The results are:

**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: VT1802 Analog [VT1802 Analog]
 Subdevices: 0/1
 Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 2: VT1802 Alt Analog [VT1802 Alt Analog]
 Subdevices: 1/1
 Subdevice #0: subdevice #0
card 1: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
 Subdevices: 1/1
 Subdevice #0: subdevice #0
card 1: HDMI [HDA Intel HDMI], device 7: HDMI 1 [HDMI 1]
 Subdevices: 1/1
 Subdevice #0: subdevice #0
card 1: HDMI [HDA Intel HDMI], device 8: HDMI 2 [HDMI 2]
 Subdevices: 1/1
 Subdevice #0: subdevice #0

If card 0 had turned out to be the HDMI card and card 1 the PCH card, I’d reverse the “index=” values in the two lines in the file /etc/modprobe.d/alsa-base.conf.

I’m done, right? Not at all. For starters, I needed to run alsamixer again to inspect the default and PCH cards. I checked again that anything that outputs sound is not muted and not at a very low setting. My Speaker item was not muted, but it was set to 0. No clue why.

Now to verify that pulseaudio is configured to talk to the right sink (=output device). I listed out the directory /dev/snd. Mine looks like this:

drwxr-xr-x. 2 root root 80 Jul 20 10:31 by-path
crw-rw----+ 1 root audio 116, 7 Jul 20 10:31 controlC0
crw-rw----+ 1 root audio 116, 2 Jul 20 10:31 controlC1
crw-rw----+ 1 root audio 116, 11 Jul 20 10:31 hwC0D0
crw-rw----+ 1 root audio 116, 6 Jul 20 10:31 hwC1D0
crw-rw----+ 1 root audio 116, 9 Jul 20 10:31 pcmC0D0c
crw-rw----+ 1 root audio 116, 8 Jul 20 10:31 pcmC0D0p
crw-rw----+ 1 root audio 116, 10 Jul 20 10:31 pcmC0D2p
crw-rw----+ 1 root audio 116, 3 Jul 20 10:31 pcmC1D3p
crw-rw----+ 1 root audio 116, 4 Jul 20 10:31 pcmC1D7p
crw-rw----+ 1 root audio 116, 5 Jul 20 10:31 pcmC1D8p
crw-rw----+ 1 root audio 116, 1 Jul 20 10:31 seq
crw-rw----+ 1 root audio 116, 33 Jul 20 10:31 timer

Next I listed out /dev/snd/by-path:

lrwxrwxrwx. 1 root root 12 Jul 20 10:31 pci-0000:00:03.0 -> ../controlC1
lrwxrwxrwx. 1 root root 12 Jul 20 10:31 pci-0000:00:1b.0 -> ../controlC0

From the “aplay -l” output just above I know that PCH is card 0 (as I wanted it to be), so pulseaudio should have its sink be device “pci-0000:00:1b.0” since that’s a link to controlC0 (the control device for card 0).
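The same mapping can be read off with a one-liner. The listing is pasted in as a string here so the example stands alone; on a live system you’d pipe in the output of “ls -l /dev/snd/by-path”:

```shell
#!/bin/sh
# Turn each by-path symlink into a "PCI address is card N" line.
links='pci-0000:00:03.0 -> ../controlC1
pci-0000:00:1b.0 -> ../controlC0'
# Field 1 is the PCI address, field 3 the ../controlCN target.
printf '%s\n' "$links" | awk '{ sub(/.*controlC/, "card ", $3); print $1, "is", $3 }'
```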

Now it’s time to check pulseaudio. I ran “Pulseaudio Manager” (it’s in my Multimedia menu on KDE) and selected the “Server Information” tab. The default sink should be “alsa_output.<DEV>.<Something>” where “<DEV>” for me is the “pci-0000:00:1b.0” value. On my system the “<Something>” part says “analog-stereo.” Next I clicked on the “Devices” tab, clicked on the “alsa_output.pci-0000:00:1b.0.analog-stereo” line and clicked on the “Properties” button at the lower right. There’s a volume control there as well. I clicked “Reset” to set it to 100%. I then clicked on the “Go to Monitor Source” button below the volume slider and set the volume there to something reasonable as well. That’s it for Pulseaudio Manager.

At this point I launched VLC, opened a video file (a TED talk) and sound poured forth from the speakers.

One last note. Now that I have sound basically working the “speaker-test” program errors out telling me that the device is busy:

Playback device is plughw:0,0
Stream parameters are 48000Hz, S16_LE, 1 channels
Using 16 octaves of pink noise
Playback open error: -16,Device or resource busy

I suspect that it’s because pulseaudio has it open.

Good luck if you’re working a sound issue. I’m no expert but if I can help just send me an e-mail and I’ll try to offer a suggestion or two.

Some of What I Learned at MicroConf 2013

May 3rd, 2013

Well, MicroConf 2013 is in the past.  It always goes by too fast.

After the Tuesday wrap-up Rob, Mike and a few others grabbed some pizza.  While we were talking Rob asked me what I got out of the conference.  For the life of me, I couldn’t remember my mental list.

When I arrived at the airport (took the red-eye back to Boston) I started putting the list together.  Rob wanted us to have at least three takeaways from the conference.  I stopped at 10.  I bet I could put down at least 10 more.

  • Teaching is the best form of marketing.
  • Rob Walling can’t tell a joke to save his life.
  • Be brave and do the work to build your e-mail list.
  • Website delivery speed matters. It needs to be under 3 to 4 seconds.
  • Visitors will size up your offering in something like five seconds. Design your home page to communicate that fast.
  • Strive to increase trust and lower friction.
  • AdWords: play by the rules and you have little to worry about from Google.
  • Mike Taber is the “Drink Fairy.”
  • You must clearly understand your customer acquisition costs and lifetime value (LTV).
  • MicroConf is a great conference. I’d like to see more alumni next year.
  • Get out and talk to customers. We all know this but we heard it again from presenter experiences over and over and over.
  • You’re almost certain to fail.  Get over it.  Keep going.  Learn.


NASA Parallel Distributed Processing Project

July 28th, 2012

PDP: A paradigm shift for NASA Space Shuttle Flight Design

STS 47 Launch

During my time working on the Space Shuttle at NASA/Johnson Space Center, I developed a Parallel Distributed Processing (PDP) package to distribute tasks such as Space Shuttle trajectory simulations across the Unix workstation network we had at Rockwell Space Operations. The basic idea is very simple (as many people have told me): find an idle workstation somewhere on the network and run a command there instead of on the user’s workstation.

Although it sounds straightforward, there are many issues and problems to be dealt with. Some examples:

  • What exactly is an “idle” workstation?
  • How do you keep track of when a remote (distributed) task has finished?
  • What do you do if a remote workstation crashes? How can you tell?
  • How do you handle intermittent network connectivity problems?
  • What do you do if someone logs onto a previously-idle workstation that you’ve distributed a task to?

The package was developed using Perl and C: Perl for the main tasks, and C for a daemon process and for remote procedure calls (RPC). The project evolved from an idea to a simple script, then to a more sophisticated script, then to a fully functional prototype. The estimated cost savings NASA realized from using the PDP software is more than $40 million.

NASA apparently thought the idea, and the increased efficiency, was worthwhile and awarded me a NASA New Technology Award. I also received a Technology Utilization Award from Rockwell. The IEEE also invited me to make a presentation at a local chapter meeting.

Click the left button below to read the slides for the IEEE presentation. The presentation also contains results (using the prototype) from a project for Space Shuttle Flight Software upgrades where PDP reduced the compute time required from 378 days to under 12. A write-up summarizing the project was also published in NASA Technology Review magazine; click on the right button to read the article.

Sequential Simplex Optimization

July 28th, 2012


There are many different optimization algorithms and methods. The goal of an optimization algorithm is to analyze the response of a system by varying a set of inputs (“factors”), and determining the set of factors which yields the “best” responses. To a certain extent the choice of what method to employ depends on the system being optimized. There is no one “best” method, and therefore it is essential to have several in your toolbox to use on various problems.

In most cases the input factors and responses are subject to constraints. For example, determining the optimum route and speed to fly a plane is subject to constraints such as air traffic restrictions, fuel capacity, weather, and airplane structural limitations. Optimizing the operations of a manufacturing plant is subject to constraints such as environmental regulations, safety limits of machinery, and raw materials costs. How easy it is to incorporate constraints is one criterion to consider when choosing an optimization algorithm.

This paper describes the Sequential Simplex Optimization algorithm, how it works, some of the issues to keep in mind when considering using it, etc. I proposed using Simplex optimization for Space Shuttle Flight Design and Dynamics (FDD) and I later used Simplex to optimize the fuel flow for the Interim Control Module (ICM) Propulsion Subsystem simulator/trainer for the International Space Station. The ICM system is diagrammed above. The station problem was rather large (26-dimensional) and there were many challenges in finding the solution. I wrote the algorithm, including constraint and vehicle simulation execution, in Python. It required some trial-and-error testing and adjusting, but in the end it worked.
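The presentation below has the details of the algorithm. As a toy illustration, here is a minimal Nelder-Mead-style sequential simplex in Python. This is only a sketch (not the ICM code), with no constraint handling, minimizing a simple two-variable quadratic:

```python
def simplex_minimize(f, start, step=1.0, iters=200):
    """Minimal Nelder-Mead-style sequential simplex: reflect the worst
    vertex through the centroid, with expansion, contraction, and shrink."""
    n = len(start)
    # Initial simplex: the start point plus one perturbed point per dimension.
    pts = [list(start)]
    for i in range(n):
        p = list(start)
        p[i] += step
        pts.append(p)
    for _ in range(iters):
        pts.sort(key=f)                              # best first, worst last
        best, second_worst, worst = pts[0], pts[-2], pts[-1]
        # Centroid of every vertex except the worst one.
        cen = [sum(p[i] for p in pts[:-1]) / n for i in range(n)]
        refl = [2 * cen[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            # Reflection is the new best point: try pushing further (expansion).
            exp = [3 * cen[i] - 2 * worst[i] for i in range(n)]
            pts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(second_worst):
            pts[-1] = refl
        else:
            # Contract toward the centroid; if even that fails, shrink
            # the whole simplex toward the best vertex.
            con = [0.5 * (cen[i] + worst[i]) for i in range(n)]
            if f(con) < f(worst):
                pts[-1] = con
            else:
                pts = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                for p in pts[1:]]
    return min(pts, key=f)

# Toy response surface with its minimum at (3, -2).
quad = lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2
opt = simplex_minimize(quad, [0.0, 0.0])
print(opt)  # close to [3, -2]
```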

Sequential Simplex Optimization Presentation (scanned pdf)

Optimized Space Shuttle 1st Stage Guidance Targets

July 28th, 2012

This project researched the application of Sequential Simplex Optimization and Experimental Design concepts to the problem of optimizing some of the target values used in the Space Shuttle first stage guidance subsystem. Shuttle first stage employs “open-loop” guidance, essentially flying an attitude profile based on current speed. I first found the optimum targeting values and then modeled the response surface area around the optimum. Modeling the area (the response surface) around the optimum is critical because engineers must know how performance is impacted by minor targeting variations.

Optimized Space Shuttle First Stage Guidance Targets paper (pdf)

Optimum Space Station Orbit Inclination for Space Shuttle Support Missions

July 28th, 2012

Orbital Inertial System

When the Space Shuttle launches due east from the Kennedy Space Center the inclination of the vehicle’s orbit is 28.45 degrees. This is the same as the Space Center’s latitude. The “orbital inclination” is nothing more than the angle between the orbit and the equator, i.e., how “inclined” or “tilted” the orbit is. If the orbit stays right over the equator then the inclination is zero, and if the orbit passes over the poles then the inclination is 90 degrees.
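The connection between launch-site latitude and inclination comes from spherical trigonometry: ignoring Earth’s rotation, cos(inclination) = cos(latitude) × sin(launch azimuth), so a due-east launch (azimuth 90 degrees) gives an inclination equal to the latitude. A quick check in Python:

```python
import math

def inclination_deg(latitude_deg, azimuth_deg):
    """Orbital inclination from launch-site latitude and launch azimuth,
    using the spherical-trig approximation (Earth's rotation ignored):
    cos(i) = cos(latitude) * sin(azimuth)."""
    lat = math.radians(latitude_deg)
    az = math.radians(azimuth_deg)
    return math.degrees(math.acos(math.cos(lat) * math.sin(az)))

# A due-east (azimuth 90 deg) launch from KSC at 28.45 deg N latitude:
print(round(inclination_deg(28.45, 90.0), 2))  # 28.45
```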

What this project investigated was a small increase in the orbital inclination for Space Station Freedom, and subsequently all Shuttle missions to the Station. We looked into the change because of the constraints imposed on launching the Shuttle due to where the External Tank re-entered the atmosphere and the debris landed in the Pacific. The goal was to find ways of increasing the payload capacity of flights to the Station. Since the cost to deliver payload to orbit is on the order of $10,000 per pound, and obviously there were going to be many missions to the Station, anything that could increase the delivery capacity could save NASA quite a bit of money.

What we found was that increasing the Station orbital inclination slightly, from 28.45 degrees to 28.80 degrees, saved at least 700 pounds of propellant per flight. All the details are in the presentation and paper below. If you do the math, the cost savings could’ve been on the order of $100 million. Here’s where that figure comes from:

  • Assume a minimum of 15 Shuttle station assembly flights.
  • Savings per flight is at least 700 pounds.
  • Cost per pound to orbit is on the order of $10,000.
  • $10,000 x 700 x 15 = $105,000,000.

NASA never realized the savings because the SSF project was canceled and replaced with the International Space Station (ISS) project. The orbital inclination for the ISS is about 51.64 degrees, which is the latitude of the Baikonur Cosmodrome. Oh, well. At least it was a lot of fun to research!

Presentation Slides (scanned PDF) Companion Paper (scanned PDF)

Introduction to the C Programming Language

July 28th, 2012

Introduction to the C Programming Language

Snippet of a C Program

As many books do, this started out as a set of notes for a class. I taught several C classes for the staff at Rockwell Space Operations in Houston when I worked on the Space Shuttle Program. What I wanted to do was to drive home that the engineers, who up to this point had been using Fortran, actually knew many of the concepts and processes already. They weren’t starting from scratch. A good bit of what they needed to know was more along the lines of “how do I write that in C.”

The document below contains the first several chapters, then a few with just the section headings, with the last few chapters removed entirely. I can’t give away the entire book! I never looked into getting it published because at the time there were so many C books on the market I couldn’t see what one more would add. Also, my book was targeted to a very specific audience. Some of the later chapters cover issues and problems specific to writing scientific code, developing large-scale trajectory simulations, and other topics important to Space Shuttle engineers. A few examples:

  • How to handle passing invalid values to mathematical functions, e.g., log(-1). Space vehicle software just can’t pass back NaN (=”not a number”). How do systems that cannot afford to crash handle these situations?
  • When a complex calculation yields a value that is essentially, say, zero, it usually isn’t exactly zero. A value like 10^-10 meters is a very small distance (it’s an angstrom, actually) but it’s not zero. If you test whether it’s exactly equal to zero, the test will fail. When is a value “near enough” to another value?

The book was written using LaTeX. A very dear friend and I wrote a graduate textbook titled Modern Astrodynamics and that project was my first exposure to LaTeX. It works. You can solve just about any typesetting problem you want with it. It took some time to learn, but it was well worth the investment.

Introduction to the C Programming Language (pdf)

Multi-Stage Monte Carlo Optimization

July 28th, 2012

Multi-Stage Monte Carlo Optimization

Scatter Plot

Monte Carlo optimization has been a widely used optimization method for many years. It’s very easy to understand: For a set of factors (independent variables), randomly select values for each factor to make a set of inputs to the system and test the system response for each set. For example, referring to the picture, we would pick different (x,y) value pairs and test the system response.

Monte Carlo optimization has several advantages:

  • It will not be distracted by high variability in the system response, since it doesn’t use the response to decide where to sample next.
  • You do not need to have much of an understanding of the system response. You just define the range of values for each input and let the method randomly pick test sets.

Monte Carlo optimization also has several disadvantages:

  • The method does not learn from test results as it runs. An area where the optimum probably is located may become clear after a certain number of tests but the method won’t know it and will just blindly keep testing random points.
  • You often need a large sample size of points to find the optimum, and if each test is expensive (e.g., an airplane flight test or a complex manufacturing process) then the costs of using Monte Carlo would be prohibitive.

What would improve the method is a way to break up the sampling so that more points could be run in the area where the optimum value is more likely to be. This is what Multi-Stage Monte Carlo does, and I modified the method to adjust the sampling to handle cases where there are many local optima in the response surface. Today Monte Carlo optimization is not used as widely as before but there are times when it’s still the best method, and the Multi-Stage and Enhanced Multi-Stage algorithms can help to lessen some of the disadvantages.
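Here is a minimal sketch of the multi-stage idea in Python: sample uniformly, keep the best point, shrink the search box around it, and repeat. The stage count, sample size, and shrink factor below are illustrative, and this basic version leaves out the adjustments for response surfaces with many local optima:

```python
import random

def multistage_monte_carlo(f, bounds, stages=4, samples=200, shrink=0.5, seed=1):
    """Multi-stage Monte Carlo minimization sketch: random uniform sampling,
    then a smaller search box centered on the best point found so far."""
    rng = random.Random(seed)                 # seeded for repeatability
    best_x, best_y = None, float("inf")
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    for _ in range(stages):
        for _ in range(samples):
            x = [rng.uniform(lo[i], hi[i]) for i in range(len(bounds))]
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        # Next stage: shrink the box around the best point so far, so more
        # of the sampling effort lands where the optimum probably is.
        for i in range(len(bounds)):
            half = (hi[i] - lo[i]) * shrink / 2
            lo[i], hi[i] = best_x[i] - half, best_x[i] + half
    return best_x, best_y

# Toy response surface with its minimum at (1, -1).
bowl = lambda v: (v[0] - 1) ** 2 + (v[1] + 1) ** 2
x, y = multistage_monte_carlo(bowl, [(-5, 5), (-5, 5)])
print(x, y)
```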

Multi-Stage Monte Carlo Optimization Presentation (scanned pdf)

Project Management Challenges in the Space Shuttle Flight Design Program

July 28th, 2012

Many of the same competing project forces which are common in the business world are also well represented in the Shuttle Program. The Space Shuttle has always been a work-in-progress, constantly being reviewed, revised, studied, and improved. How is this constant change managed? Projects compete for scarce resources (knowledgeable staff, funding, time, etc.). Complicating the management of these competing forces is the nature of the program: America’s National Space Transportation System. Although the day-to-day work becomes as routine as it does in most environments, what we all work on every day (designing Space Shuttle missions) requires special attention and thoroughness. This presentation discusses Space Shuttle flight design and the challenges, risks, and opportunities unique to driving projects in the shadow of Apollo and the rarefied air of mission control.

PM Challenges in the Space Shuttle FDD Program Presentation (pdf)