Tag Archive


Installing IRAF/PyRAF from the (Debian) repository directly

IRAF has recently become available through Debian’s repositories, so I was curious to see how it works, but I kept missing the right moment. About a month ago I finally found the proper motivation and went on to install it. As always … there were a few minor problems. Thanks to John K. we were able to resolve them! What follows is a small guide to what I faced during the installation (on Debian 10/Buster).

Instead of the previous (relatively) easy installation methods, now you can simply do:

sudo apt-get install iraf

And the magic happens! The typical next step would be to run “mkiraf” to build the necessary login.cl file. And that’s the very first problem, as there is no such command (try mkiraf and you will get nothing).

Fortunately, mkiraf is not strictly necessary, as login.cl is just an ASCII file with some default parameters set. We can easily copy such a file from another machine and put it in a proper directory (typically I create one under home, such as /home/user/iraf/). In any case, what we need to change is the first few lines:
set home = "/home/user/iraf/"
set imdir = "home$images"
set cache = "home$cache"
set uparm = "home$uparm/"
set userid = "user"

where you replace user with your username (or put the appropriate path in the first line, and your username in the last one).

Another key setting is the terminal type used (typically xterm, xgterm, …). The original login.cl file will already contain something (the terminal type is asked for when IRAF is set up with mkiraf), similar to:
# Set the terminal type. We assume the user has defined this correctly
# when issuing the MKIRAF and no longer key off the unix TERM to set a
# default.
if (access (".hushiraf") == no)
print "setting terminal type to 'xterm' ..."
stty xterm

Either you do not need to do anything (if you have xterm installed already) or you put in the terminal type you are using.
Do not edit anything else in the rest of the file, as these are the default IRAF parameters.

[NOTE: copy a fresh login.cl taken right after running mkiraf, or else you may end up with a version of the file with modified parameters set by its owner – although you should always be careful to ‘unlearn’ the commands and check the parameters used!]

Then you should be ready to use IRAF! Typing the usual “cl” or “ecl” unfortunately doesn’t work [1] (which is a bit frightening at the beginning). But with this installation you just run “irafcl” and the normal ecl terminal starts immediately (nice!).

However, ecl is not that convenient, so we urgently need PyRAF. This can be easily installed through pip:

pip install pyraf
pip3 install pyraf

(for Python 2 and 3, respectively). Now, even though pip installs all its files under separate directories, it builds only one binary, under ~/.local/bin/pyraf. So if you installed PyRAF for Python 2 first, you will see it running properly with Python 2. If you then also add the pip3 version, you will notice that only PyRAF for Python 3 is available. Frustrating? You bet!

After some exploration we figured out that PyRAF creates only one binary, so it overwrites the previous one. We tried changing the name to see if it could run independently, but this didn’t work (for whatever reason … probably it is bound to something else?). So there seems to be no way (I wonder?) to have PyRAF for both Python versions. [Before asking why keep both versions, keep in mind that somebody may want to use scripts from older versions!]

The (not at all convenient) solution found so far is to:

1. install first PyRAF for Python v2
2. rename the pyraf binary (~/.local/bin/pyraf) to something else (e.g. pyraf-v2; this is only a backup, as it doesn’t work even if you call it explicitly)
3. install PyRAF for Python v3
4. keep the pyraf binary as it is (for use with Python 3)
5. change the names accordingly if you want to use the older version

[There is another package, python3-pyraf, that I didn’t test [2], as I tend to keep Python installations as clean as possible by using pip only.]

However, even after that PyRAF may still not run properly. For example:
user@mymachine:~$ pyraf

Your “iraf” and “IRAFARCH” environment variables are not defined and could not
be determined from /usr/local/bin/cl. These are needed to find IRAF tasks.
Before starting pyraf, define them by doing (for example):

setenv iraf /iraf/iraf/
setenv IRAFARCH linux

at the Unix command line. Actual values will depend on your IRAF installation, and they are set during the IRAF user installation (see iraf.net), or via Ureka installation (see http://ssb.stsci.edu/ureka). Also be sure to run the “mkiraf” command to create a login.cl (http://www.google.com/search?q=mkiraf).

What is missing now is to define these environment variables properly. You can either type the following lines in the terminal or add them to your .bashrc file, so that PyRAF is available at all times:

export iraf='/usr/lib/iraf/'
export IRAFARCH='linux'

Another working IRAF/PyRAF environment has been deployed!

17/7/2019 Update: Check olebole’s comment below regarding points 1 and 2.

Fixing GPG error “NO_PUBKEY”

In Debian, Ubuntu, and similar distros that use APT (the Advanced Package Tool – a set of tools for managing Debian packages/applications), you update the system by running:

sudo apt update

This reads all the repositories (as listed in /etc/apt/sources.list and under /etc/apt/sources.list.d/) and checks that everything is correct (e.g. that the links work and that these sites/repositories are trusted sources to install from). Doing this on my system, I got the following:

Hit:1 https://repo.skype.com/deb stable InRelease
Hit:2 http://security.debian.org/debian-security buster/updates InRelease
Hit:3 http://deb.debian.org/debian buster InRelease
Hit:4 http://deb.debian.org/debian buster-updates InRelease
Err:1 https://repo.skype.com/deb stable InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1F3045A5DF7587C3
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://repo.skype.com/deb stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1F3045A5DF7587C3
W: Failed to fetch https://repo.skype.com/deb/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1F3045A5DF7587C3
W: Some index files failed to download. They have been ignored, or old ones used instead.

In this case there is an error with respect to Skype. The system does not have the public key for this package, so it complains and prevents the system from downloading something whose origin cannot be verified (even though it is something we want!).
At the same time, this means that I didn’t do something correctly when installing Skype (and, to be honest, I do not remember what I did!). Anyway, the proper procedure is a two-step process:

1. Add the repository under /etc/apt/sources.list.d/ as a separate file, e.g. by:

echo "deb [arch=amd64] https://repo.skype.com/deb stable main" | sudo tee /etc/apt/sources.list.d/skype-stable.list

(adding the repository to /etc/apt/sources.list is actually equivalent – the only difference is that you need to edit that file, while it is more convenient, especially for automated scripts, to create a new file under sources.list.d/)

2. Download the GPG public key that verifies the repository. To do that we can simply run:

sudo apt-key adv --fetch-keys https://repo.skype.com/data/SKYPE-GPG-KEY

and we will get:

Executing: /tmp/apt-key-gpghome.fD2Z003jib/gpg.1.sh --fetch-keys https://repo.skype.com/data/SKYPE-GPG-KEY
gpg: requesting key from 'https://repo.skype.com/data/SKYPE-GPG-KEY'
gpg: key 1F3045A5DF7587C3: public key "Skype Linux Client Repository " imported
gpg: Total number processed: 1
gpg: imported: 1

(This is similar to, or better than, doing:

wget URL -O - | apt-key add -

or

curl URL | apt-key add -

).

This adds the GPG key to the /etc/apt/trusted.gpg file, and now if we try to update the system again we will see no error or warning.

Hint: to see all the contents of the trusted.gpg file, just type apt-key list!

Installing Debian on Lenovo Thinkpad Carbon X-1 (Gen 6)

It is exciting to have a new machine, such as the Lenovo Thinkpad X-1, but the process of setting it up can be a bit tedious. So this is a small guide to what worked and what didn’t in my brief experience of installing Debian Linux. [Just for the fun of it I first went through the Windows installation, which was really easy with all the vocal commands! But they wouldn’t live long … ]

BIOS boot

I started by downloading the net-install CD image for Debian 9 (‘stretch’) and putting it on a USB key. I plugged the key into the laptop and started the booting process. First problem: even though it could see the USB key and showed the Debian distro, it wouldn’t start. I couldn’t figure out why at all, until I noticed that the new hardware has a new BIOS technology called UEFI (yes … I hadn’t installed anything for a looong time!). I went to BIOS > Startup > UEFI/Legacy Boot, which was set to ‘UEFI Only’, and changed CSM Support to ‘Yes’. But this alone was not enough, so I also changed UEFI/Legacy Boot from ‘UEFI Only’ to ‘Both’ [I think it should also be possible to use ‘Legacy Only’ if you plan to run Linux only, but if Windows is necessary then leave it at ‘Both’].

Debian installation

Then the USB key would finally load and the installation could start. With the graphical installation of Debian the screen goes blank after some time of inactivity, which you can correct with a small movement of the mouse. But since I didn’t have any mouse connected, I thought that something was wrong. It took me a few iterations before I understood that, which I find ridiculous when you are installing an operating system – I think the monitor should be on at all times! The installation went on without any other issues. After logging in to the new system, I updated all packages.

Trackpoint/Trackpad issue

At the beginning the trackpad (touchpad) worked, but with some lag. The trackpoint didn’t work at all, but I didn’t pay any serious attention to that. I installed the wifi driver (firmware-iwlwifi, for Intel chips) and after that the touchpad wouldn’t work at all. This is a known problem and many solutions are offered (updating to the latest kernel, 4.18, and to Debian testing didn’t work). However (and after some iterations of the installation process), I finally went to BIOS > Config > Keyboard/Mouse > Trackpoint and ‘Disabled’ it. That way the touchpad works without any issue. [At this point it is not so urgent for me to fix this, but I will come back to it in the future.]

Suspend mode

There is a known issue that certain BIOS versions have serious problems recovering from suspend. I learned that the hard way: after suspending the laptop, I tried to enter my password over and over again, as each time it would accept only a few characters before raising an incorrect-password error. I had to shut it down brutally (pressing the power button), but for whatever reason the X server would not start afterwards. So I had only text access and I couldn’t restart it (of course I may have missed some more appropriate approaches). In the end, I re-installed Debian from scratch. To actually solve the problem, I needed to update the BIOS.

BIOS update with fwupd

There is a new, easy way to update firmware and BIOS through Linux, using fwupd. After the fresh installation I set this as my first target, before installing anything else. fwupd is in the Debian repository, so it was simple to install, and the instructions are really easy to follow: just run ‘fwupdmgr refresh’ to get the latest metadata for your firmware.

At this point, however, it was failing to connect to the site (error message: “Failed to download https://s3.amazonaws.com/lvfsbucket/downloads/firmware.xml.gz.asc: Not Found”). The issue (Debian bug #912414) was simply a wrong URL, which was easy to fix by editing the config file “/etc/fwupd.conf” and replacing “DownloadURI=https://s3.amazonaws.com/lvfsbucket/downloads/firmware.xml.gz” with the new site: “DownloadURI=https://cdn.fwupd.org/downloads/firmware.xml.gz”.

That should do the job, right? NO! The version of fwupd shipped with Debian 9 is 0.7, and according to Richard Hughes: “LVFS will block old versions of fwupd for some firmware […] The ability to restrict firmware to specific versions of fwupd and the existing firmware version was added to fwupd in version 0.8.0. This functionality was added so that you could prevent the firmware being deployed if the upgrade was going to fail, either because: i. The old version of fwupd did not support the new hardware quirks, ii. If the upgraded-from firmware had broken upgrade functionality.”

So, let’s upgrade fwupd by upgrading the whole system (at the same time it would be nice to have a fully fresh one): I switched the repositories to Debian 10 (testing, named ‘buster’), upgrading also the Linux kernel from 4.9 to 4.18. And indeed fwupd was now working! Almost…

BIOS update with USB key

I ran fwupd to get the latest BIOS version, but it refused: the installed version was 1.25, while only versions >1.27 can be upgraded with fwupd. Now the only solution is the classic one, i.e. to download the latest Thinkpad BIOS update (bootable, for Windows) and install it directly from a USB key. After downloading the .iso image I followed the instructions by Vivek Gite (the most important part being the video showing what to do during the actual BIOS installation).

[In brief, using geteltorito, the El Torito boot image extractor (Debian package name: genisoimage), do the following:
geteltorito -o bios.img n23ur13w.iso
sudo dd if=bios.img of=/dev/sdb bs=1M

Take care that you know exactly which device is the USB key you are going to write to, or else you are going to destroy data!
Then reboot, enter the BIOS, and boot from the USB key. Select option 3 to verify that you have the right model (in case you press something and cannot cancel, just retype the model number) and then press 2 to start the actual BIOS update. It will run for a while, and when you reboot it will continue before showing the boot screen.]

Crossing fingers and … booting again! Phew … everything works, and indeed I now have the latest version (1.34).

Being a (much) later BIOS version, it should have fixed the suspend issue. Did it work? So far I have not experienced any critical issues. What I did notice, though, is that the machine felt a bit hot while suspended. Going through the BIOS settings I discovered that there is now an option for the sleep state (I don’t know if it was present in the previous version): Config > Power > Sleep State, with two options, ‘Windows 10’ and ‘Linux’. I obviously picked the latter, and I think it is working now.

Python pip

Finally, the time to actually install what I need to start working had arrived! Since both Python 2.x and 3.x exist, I was going to use pip only to install all necessary packages. I installed pip for both Python versions from the distro repositories. Checking the pip site I saw the following recommendation: “Ensure pip, setuptools, and wheel are up to date, by doing: python -m pip install --upgrade pip setuptools wheel”.

Well, first of all … DON’T DO IT! I had missed the obvious warning:

Be cautious if you’re using a Python install that’s managed by your operating system or another package manager. get-pip.py does not coordinate with those tools, and may leave your system in an inconsistent state. You can use python get-pip.py --prefix=/usr/local/ to install in /usr/local which is designed for locally-installed software.

By default the pip version provided by the Debian repositories is 9.x. When I did this upgrade I got the latest version, 18.x, and went on to install numpy, scipy, and matplotlib. But when I started Python to try out the installation, nothing was working! Even worse:

pip install astropy
Traceback (most recent call last):
  File "/usr/bin/pip3", line 9, in <module>
    from pip import main
ImportError: cannot import name 'main'

What is going on now? Apparently it is not a good idea to use pip to upgrade a system-managed installation of pip, as pip then gets confused about which version to use and, even worse, some scripts managed by apt may break (see e.g. the discussions in GitHub issues #5447 and #5221). So the two important pieces of advice here are:
1. upgrade/install under the user (with --user)
2. avoid running pip with sudo (which may affect root-owned files)

I found various ways to fix this over the internet, but I opted for a more … brutal way. I noticed that when installing Python packages from the distribution repositories, everything goes under /usr/lib/python*.*/dist-packages/, while pip puts everything under /home/user/.local/lib/site-packages/. So the idea was to remove (manually delete) everything under these directories; then I could re-install pip and the Python packages, and it should be clean of all errors. I did that but continued to get errors, probably because I didn’t re-source the terminal (so paths to the previous packages and scripts were still active). After a reboot everything worked fine. [One note: after a fresh installation everything is new, including the content of $PATH. When I installed jupyter and tried to run it I got a “command not found” error, obviously because I had forgotten to update $PATH to include the ~/.local/bin/ directory, where binaries from pip are stored.]
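
The two directories involved can be checked directly; here is a minimal sketch, assuming python3 and the default pip layout:

```shell
# Show the per-user base directory that `pip install --user` targets,
# and the bin/ subdirectory that $PATH must include for pip-installed scripts.
USER_BASE=$(python3 -m site --user-base)
echo "packages go under: $USER_BASE/lib"
echo "scripts go under:  $USER_BASE/bin"
```

On a typical Linux setup the user base is ~/.local, which is why jupyter and friends end up in ~/.local/bin/.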


Other resources

The above is my personal (certainly not the wisest) experience of installing Debian 9/10 (stable/testing) on the Thinkpad X-1. There are also more thorough and knowledgeable guides out there (which I have to check for other issues, too), such as:
–> Installing Debian On Thinkpad X1 Carbon 5th Gen (previous generation)
–> Lenovo ThinkPad X1 Carbon (Gen 6) by Arch linux

Merging catalogs and creating unique identifier in bash

For a certain project I had created a number of photometric catalogs, each one corresponding to a specific observing field. I wanted to construct the final (merged) catalog, but for this I needed to add a unique source identifier at the beginning of each row. I decided to create an F#-**** tag for each source, with “F#” corresponding to the field id and **** to a counter for each source per field. The final command was:

for i in {1,2,4,5,6,7,8,9,10,11,12,13,16}; do
  echo F$i.matches.all.cat
  awk -v id="$i" 'FNR>1 {print "F"id"-"1+c++, $0}' F$i.matches.all.cat >> results.tmp
done

So the command loops over the specific field numbers for which a catalog with a filename of the form F*.matches.all.cat exists. The number of each field ($i) is passed as an external variable (id) to awk, which uses it to build the unique identifier “Fid-counter”, with the incrementing “counter” (1+c++) corresponding to the row number (1+counter so as to begin from 1 instead of 0; FNR>1 skips the first line of each catalog, which is a column description). All results are appended to the output file results.tmp (created automatically if it doesn’t exist).

Then, we can use sed to add the header:

sed -i '1i\#SourceID ...' results.tmp
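
As a self-contained illustration of both commands, here is a toy run with two mock catalogs (field numbers, column names, and values are all made up):

```shell
# Work in a scratch directory with two tiny mock catalogs;
# the first line of each catalog is a column description.
cd "$(mktemp -d)"
printf '#RA DEC MAG\n10.1 -72.0 15.2\n10.2 -72.1 16.0\n' > F1.matches.all.cat
printf '#RA DEC MAG\n11.5 -73.0 14.8\n' > F2.matches.all.cat

for i in 1 2; do
  # FNR>1 skips the header; 1+c++ numbers sources from 1 within each field
  awk -v id="$i" 'FNR>1 {print "F"id"-"1+c++, $0}' F$i.matches.all.cat >> results.tmp
done

# add the header line (column names are hypothetical)
sed -i '1i\#SourceID RA DEC MAG' results.tmp
cat results.tmp
```

which prints the merged catalog with the identifiers F1-1, F1-2, F2-1 in the first column.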

Shared folder in linux

Suppose that we want to create a folder in which multiple users will have full access (which means that they will be able to read, write, and execute).

  1. We create the folder (under ‘/’ in this example, although it can be anywhere else):

    sudo mkdir test

    which has the following permissions:

    drwxr-xr-x 2 root root 4096 Oct 23 13:07 test

  2. We make a new group of users (“testers”):

    sudo groupadd testers

    and then we add to this group all users we want to have access:

    sudo usermod -a -G testers testuser

    (id testuser will return the groups that the testuser belongs to)

  3. We change the permissions of the shared folder (this is more useful if we have copied or moved a folder with some contents already):

    sudo chgrp -R testers /test
    sudo chmod -R g+rwx /test

    so that every user in the group “testers” can access the folder.
    However, when a user creates a file, it is assigned the permissions of the user and of the user’s primary group, e.g.

    -rw-rw-r-- 1 testuser testuser 0 Oct 23 14:21 tempfile

  4. In order for new files and folders created inside the shared folder to automatically inherit the group, we set the “s” bit (set group ID – SGID):

    sudo chmod -R g+s /test

    For example:

    drwxrwsr-x 2 testuser testers 4096 Oct 23 14:34 newdir
    -rw-rw-r-- 1 testuser testers 0 Oct 23 14:35 newfile

    Now everyone in the “testers” group can create, edit, delete, etc. the contents of this folder.
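
The SGID bit itself can be tried out without root on a throwaway directory; a minimal sketch (the path is just an example, and the numeric mode 2775 combines 2 = SGID with 775 = rwxrwxr-x):

```shell
# Create a scratch directory and set group rwx plus SGID in one numeric mode
demo="$(mktemp -d)/shared"
mkdir "$demo"
chmod 2775 "$demo"
stat -c '%A' "$demo"   # prints drwxrwsr-x: the lowercase 's' marks the SGID bit
```

The same 's' is what appears in the listing of newdir above.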

Useful articles:
[1] “How can I give write-access of a folder to all users in linux?”, Superuser.com, retrieved on Oct 23, 2014
[2] “Using SGID to Control Group Ownership of Directories”, Yale University Library Workstation Support / Yale University Library, retrieved Oct 23, 2014
[3] “Linux chmod command sticky bit example and implementations”, ComputerNetworkingNotes.com, retrieved Oct 23, 2014

Installing matplotlib through pip but no plot displayed

The easiest way to install any Python package is through PyPI. So matplotlib is no exception, and we installed it on a CentOS (v6.4) machine without any errors (after updating numpy, of course).

But when we tried to plot something, nothing was displayed. This is actually a backend issue: when we installed matplotlib there was no support for any interactive backend (only the default agg, which is supplied with matplotlib but renders to files rather than to the screen).

To solve this, we first installed the pygtk development packages and then re-installed matplotlib, through

pip install matplotlib

which was now built with GTKAgg as the default backend.
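
To check which backend matplotlib actually picked up, run a one-liner with whichever interpreter you installed it into (python3 shown here as an example; matplotlib must be importable):

```shell
# Print the backend matplotlib will use for new figures
python3 -c 'import matplotlib; print(matplotlib.get_backend())'
```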

Happy plotting !

Updating clock time through terminal

In my Fedora 14 desktop I keep losing minutes without knowing how or why (doesn’t the clock update automatically?).
At the beginning I tried to change the file /etc/ntp.conf (editing it to set the server parameter to ‘server pool.ntp.org’ [1]), in case the default server did not respond correctly.
I tried to update by:
ntpdate pool.ntp.org

but the result was not the expected one; instead, an error: “… the NTP socket is in use, exiting”.
I stopped the daemon (/etc/init.d/ntpd stop) and tried to update again (ntpdate pool.ntp.org), but another error was raised: “… no server suitable for synchronization found” (reasonable though, since ntpd was down!).

After looking around a little bit, the solution [2] was to update through an unprivileged port (-u), which works even while the daemon is running:
ntpdate -u pool.ntp.org

and … that’s it! I removed the extra entry from /etc/ntp.conf (to keep only the original servers) and it worked again.

Now let’s see if it’s going to keep up or I will need to manually update the clock from time to time.

[1]: Cyberciti.biz – Synchronize the system clock to Network Time Protocol (NTP) under Fedora or Red Hat Linux
[2]: Superuser.com – Socket is in use

Fast notes on screen

> list sessions:
screen -ls

> resume session:
screen -r

> reattach (if necessary detach first):
screen -d -r

> detach session:
screen -d

> detach session (while running – keyboard shortcut):
Ctrl-a d
> kill session (inside the session):
Ctrl-a k
or the more obvious … “exit”

> kill session (outside, list sessions first):
screen -S session_number -X quit
(-S specifies the session’s name, -X sends a command to the running screen session)

Linux backup and restore filesystem – notes

A quick note on how to back up and restore (if necessary) the filesystem. We go to the root directory (/) and, as root, we run:

tar cvpzf backup.tgz --exclude=/lost+found --exclude=/backup.tgz /

c: create new archive
v: verbose
p: preserve permissions
z: gzip
f: use file (name of file=backup.tgz)

We exclude all the directories that we don’t want to back up (and especially the backup file itself!) and … run it!

In order to restore the system, we go again to the root directory and (as root) we run:

tar xvpfz backup.tgz -C /

WARNING: this will overwrite everything under the root directory (everything … means everything!).

-C: change to directory

After that, we re-create the excluded directories (like: mkdir lost+found) and we reboot our system!
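
The --exclude behaviour is easy to rehearse on a toy tree before touching the real filesystem (all names below are made up):

```shell
# Build a small tree, archive it while excluding one subdirectory,
# then list the archive to confirm the exclusion worked.
cd "$(mktemp -d)"
mkdir -p demo/keep demo/skip
touch demo/keep/file1 demo/skip/file2
tar czpf demo.tgz --exclude=demo/skip demo
tar tzf demo.tgz   # lists demo/keep/file1 but nothing under demo/skip/
```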


Zip multiple directories separately

In order to zip a number of directories simultaneously, but keep them in separate archives, the following command will do the job (in bash):

for i in *; do zip -r "$i.zip" "$i"; done

(caution: everything inside the directory where this command is executed will be zipped – each item gets its own archive – so run it from a separate directory created just for making the zip files)