mindstalk: (Default)
Good practice for a server storing passwords is not to store them at all. Rather, it hashes your password and stores that: when you log in, your password is hashed and compared to the stored value. This way someone who steals the password file doesn't get anything immediately useful. (Hashing is a one-way function.) To prevent dictionary and other attacks, the password is combined with a non-secret 'salt' value, then hashed. (The password file contains the salt and the hash(password+salt) value.)

More recently, good practice has become to repeatedly hash the password like 1000 times. If a computer can do a billion hashes in a second then you won't notice a slower login, but it makes a brute force attack (of a stolen password file) 1000x harder. This is called "key stretching" or "key strengthening". The description on Wikipedia says to repeatedly hash the hash value with the salt, and I wondered why that was necessary. I think I figured it out.

Say the salt is applied just once, followed by 1000 consecutive hashings. It's possible that two passwords and their salts would collide, giving the same value (call it samevalue) on, say, the 3rd iteration. Since they have the same value then, they'll have the same value on every subsequent hashing, and the same stored value in the file; they're basically locked in synchrony. An attacker could see that, and would get two cracked accounts for the work of one.

But by repeatedly using the salt, that's foiled. In this case, the 4th iteration would see hash(samevalue, salt1) and hash(samevalue, salt2), and diverge again due to the different salts. You can still get collisions in the password file, but it has to actually be after 1000 iterations, not at any point in between.
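A minimal sketch of the two schemes in Python (hashlib; the function names, iteration count, and use of SHA-256 are illustrative, not any particular system's actual scheme):

```python
import hashlib

def hash_once(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def stretch_salt_once(password: bytes, salt: bytes, n: int = 1000) -> bytes:
    # salt applied only at the start: a collision at any iteration
    # locks the two chains together for good
    value = hash_once(password + salt)
    for _ in range(n - 1):
        value = hash_once(value)
    return value

def stretch_salt_each(password: bytes, salt: bytes, n: int = 1000) -> bytes:
    # salt mixed in at every step: equal intermediate values
    # diverge again on the next iteration if the salts differ
    value = password
    for _ in range(n):
        value = hash_once(value + salt)
    return value
```

In the second version, even if two chains hit the same intermediate value, hash_once(samevalue + salt1) and hash_once(samevalue + salt2) differ, which is exactly the divergence argument above.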
mindstalk: (thoughtful)
I learned a neat trick this week, which I think I can explain to non-technical readers.

The problem: say you want to have data on the cloud, to be shared or synced across multiple machines, like laptop and phone, new laptop and work phone, etc. Data like managed passwords or bookmarks or hell, emails. Data you would like to keep secret. What to do?

Approach 1: log into a cloud service provider with your passphrase and upload your data. This is terrible, they can see your data, and even if they're mostly trustworthy, a bad employee or a hacker could make off with your bank passwords.

Approach 2: log into the provider with your passphrase, and upload your data encrypted with the passphrase. This is barely any better; with standard login mechanisms, the provider sees your passphrase, even if they ideally don't store it in a visible form[1], and could trivially use it to decrypt your data and make off with your bank passwords.

Approach 3: encrypt your data with a separate passphrase, and upload that encrypted data when you log in. This is solidly secure (assuming you chose strong passphrases). Many geeks like me probably do a manual equivalent, encrypting files with gpg and copying them remotely.[2] Only problem is, you need to manage two passphrases.

Approach 4: This is what I learned, and is the approach of Firefox Sync. There's a good presentation of it, and a more technically gory one, but I'll give my own description.

The key idea is that you don't log in with your passphrase. Instead, a credential is made *from* the passphrase, via a one-way hash. (One-way meaning the passphrase cannot be recovered from the hash.) That is what is sent to the server and used for log in. Which leaves your passphrase free to encrypt the data you upload, which is now safe because the provider never sees your passphrase. You don't have to trust them[2]; even if they broadcast your files to the world, ideally your data is safely encrypted. The provider only stores [(login credential), (passphrase-encrypted data)]. But any client you log in from can download the data and decrypt it with the passphrase you entered locally.

Put another way, the secret of your passphrase can be used to generate multiple secrets, for login and encryption, that don't generate each other. So you get the security of Approach 3 with the convenience of Approach 2.
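The idea of generating multiple non-interderivable secrets can be sketched like so. This is not Firefox's actual derivation (it has its own KDF chain); PBKDF2 with a purpose label mixed into the salt is my stand-in, and all names here are mine:

```python
import hashlib, os

def derive(passphrase: str, salt: bytes, purpose: bytes) -> bytes:
    # mixing a purpose label into the KDF input yields independent secrets;
    # neither output reveals the passphrase or the other output
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt + purpose, 100_000)

salt = os.urandom(16)  # non-secret, stored alongside the account
login_credential = derive("correct horse battery staple", salt, b"login")    # sent to server
encryption_key   = derive("correct horse battery staple", salt, b"encrypt")  # never leaves client
```

The server only ever sees login_credential; since the hash is one-way, it can't work back to the passphrase, and so can't compute encryption_key.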

On the flip side, if you forget and have to reset your passphrase, your encrypted data has to be thrown away; no one can recover it. That's not a big deal for Sync, especially as any device that has a copy of your data can then upload it again.

There's a bunch of complexity to the actual Firefox Sync process, but that's the fundamental insight.

***

One bit of complexity is straightforward. Approach 4 as I described it means that if you change your passphrase, your data has to be re-encrypted, which could get annoying. So instead have your client, at sign-up, generate a strong random data encryption key. Use that to encrypt your data, and encrypt the key with your passphrase. Now the provider stores [(login credential), (passphrase-encrypted key), (key-encrypted data)]. Your data is still secure since the provider never sees your passphrase or the actual key. But if you change your passphrase, only the small (login credential) and (passphrase-encrypted key) have to change, not the arbitrarily large (key-encrypted data).
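A toy version of that key-wrapping step, using the XOR wrapping mentioned below rather than real encryption (the KDF parameters and phrasing are mine, not Sync's actual protocol):

```python
import hashlib, os, secrets

def kdf(passphrase: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# sign-up: client invents a random data key and wraps it with a passphrase-derived key
salt = os.urandom(16)
data_key = secrets.token_bytes(32)
wrapped = xor(data_key, kdf("old phrase", salt))  # this is what the server stores

# passphrase change: unwrap with the old key, re-wrap with the new one;
# the (arbitrarily large) key-encrypted data is untouched
rewrapped = xor(xor(wrapped, kdf("old phrase", salt)), kdf("new phrase", salt))
```

Only the 32-byte wrapped key and the login credential change on a passphrase reset, never the bulk data.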

***

Other bits of complexity are less interesting, or even baffling. Firefox actually encrypts your data key not with the passphrase, but with another secret derived from the passphrase. There are lengthy derivation steps that make brute force attacks less feasible. The key is protected with XOR rather than some fancier encryption mechanism. And weirdest of all, instead of your browser generating a data key and sending the encrypted key to the server, what happens is that the server makes up some random number (wrap(wrap(kB)) in the Firefox writeups) from which the client derives the key. The math works out but it's a reversal of the expected flow. My best guess is that they feel they can make better random numbers than the client, which might be true if they have good hardware randomness generators on their servers.

(Though I'm not sure if it really matters; seems like they could assign '0000...' to everyone, and the data keys would still differ based on people's passphrases.)

****

[1] Passwords are supposed to be stored salted (add some non-secret extra stuff to defeat various attacks) and digested/hashed. So someone who steals the password file can't see the actual passwords. Login means you present your password, it's salted and digested and compared to what's in the file, then your password is thrown away. Whether any particular Internet site follows that protocol is another matter, which is one reason you're told not to re-use passwords across sites.

[2] In theory open-source clients can be examined, so that they can be fully trusted. How many of us do that, though? Not me... My gut feels that a single-purpose program like gpg is easier to secure -- to make sure it's never doing anything like opening a network connection while decrypting my data -- compared to some client or web browser that does everything, where it would be harder to make sure it was only opening the right network connections and not secretly sending your data somewhere.
mindstalk: (Default)
Kinnami's business model involves publishing a daily master stamp. First on Twitter and Facebook, then (after some weeks of annoying work) on the Ethereum blockchain. What I've learned:

* Once you have some money on the chain, being able to easily send it somewhere else is indeed a heady experience.

* Documentation is kind of shoddy and out of date. Part of the official docs has bug reports saying "this is no longer accurate" from a year ago -- still unfixed.

* Congestion is a huge, huge, huge problem. Ethereum has two 'currencies' associated with it. Ether is the main one, analogous to bitcoins; it's been somewhat volatile in dollars, going between $400 and $1000. But there's also 'gas', which pays for computation and transactions on the blockchain; a transaction includes a gas limit and a gas price (how much Ether you'll pay per unit of gas). And *that* price has been crazy volatile, ranging between 1 and 200. A couple days ago it was 1-2 -- which, you know, by normal standards is already hugely volatile, x2 variation -- and today it's around 60, apparently due to some Chinese app or maybe botnet.

Ethereum advocates keep talking about scaling solutions that don't materialize. They've been doing this for some years now.

* There's a family of systems using delegated proof of stake instead of proof of work: BitShares, Steem, EOS. They apparently support far higher transaction rates -- like 30,000/second instead of 3/second. Steem is used by some blogging platform; EOS is a distributed VM like Ethereum that just launched a couple months ago. Possibly I should look into switching to it.

On editor wars

2018-Apr-25, Wednesday 09:37
mindstalk: (Default)
When I was in college, I participated in vi vs. emacs editor wars. I don't recall how serious they were. My good reasons were that I was pulled into vi first, and emacs was slower to start up. At some point I'd definitely matured to finding them less interesting, and granting that emacs was probably at least as powerful as vi, or even the later vim, but I had little reason to switch.

Recently a Windows-using friend was snarking at vim as well as at git, and a bunch of good-natured ribbing ensued, but I wandered off into serious musings:

* To be honest, I'm not in a good position to say vim is 'better' than other editors, because I don't know other editors well!

* That said, vim has a lot of word-processor-like and other features which I'm fairly sure casual text editors don't. To wit:

o Macros: the ability to define or even record sequences of commands or text. I've had macros that invoked other macros.

o History: not just an unbounded undo history of text changes, but a memory for the last command and the ability to trivially repeat it, as well as a history of :commands such as replaces. I think vim also has a notion of history as a *tree*, though I've never grokked that. But the "do the last edit I did"? I use that one all the time.

o Formatting: separate commands for reformatting ordinary text and code.

o Movement: sophisticated movement commands, useful not just for moving your cursor, but as targets for edit commands (so, "delete 3 sentences", or "replace 5 paragraphs"), or (in vim) for selecting text. You could use a mouse for similar selections, but often the keyboard commands would be faster and take less arm movement.

o Marks: you can make bookmarks within files to easily jump between points, and these marks can persist across multiple files.

o IDE-like features: syntax highlighting, smart indentation, bouncing between brackets, more. A dedicated IDE is probably even better -- but if you use multiple languages, having one decent interface across them can beat having to use multiple interfaces, especially if the IDE is less sophisticated as an editor.

o Registers: if you want, you can suck text into up to 27 different clipboards.

o Completion that defaults to using the current file as a dictionary, but that can take arbitrary other dictionaries.

o And for those who can't stand funky movement keys in command mode, vim has long recognized arrow keys in editing mode, and there's a graphical vim that probably recognizes mice.

Some practical cases:

* Having to edit an IP address in lots of files:
vi *.json #edit multiple files
/192 #search for old IP
cW{new IP} #change word to new one
n. #search for next instance, and repeat last action
n. #ditto
:wn #write this file, go to the next file
n. #continue the search and replace
n.
:wn
n. #...etc, for like a dozen files

* I've had to edit the etc/hosts file on my Windows machine between two different states. Notepad++ makes a single search and replace fairly easy, and remembers the last one done, but not the last two, and the time between changes was long enough that I wasn't keeping it open to exploit undo/redo. vim could easily remember two different replace commands to invoke.

(An alternative would have been to keep two different files around, but Windows GUI doesn't make copy-and-rename that easy either, compared to a command line.)

So how does all this compare to a good Windows text editor like Notepad++? Looking at it, it does have lots of features I haven't used. There's a Macro menu, via Recording, so that's promising -- though I wonder what its notion of commands can be. The Search menu has a couple of bracket options (find matching, select to matching), though using the % key in vi would definitely be faster; also has Bookmark. Some options I doubt vim has, like 'sort lines'. (Checks: I was wrong, vim does have it.)

Overall it's a decent set of features, and for the casual or new user, the menus will be more discoverable than reading through vim documentation. But I doubt, not just from gut feeling but from comparing myself to my boss and his IDE use, that it could ever be as fast as vim mastery. Between vim and 'screen' (for switching between windows, and for cut and paste), my hands typically never have to leave the alphanumeric keyboard, not even as far as the laptop touchpad or function keys, during coding. And personally, the lack of command history would be very nearly crippling.

I was going to give Notepad++ an advantage on spellchecking, but I looked, and hey, vim has an option for that too:
:setlocal spell spelllang=en_us

And though I've never used it, there is a GUI version of vim, with at least some menus. I just found a screenshot where its Edit menu has Undo, Redo, and Repeat, and a university guide says

"Next to every menu item is a keyboard shortcut to execute that function. gVim allows a new user to easily find common functions through the menu bar, but also gives the user the keystrokes that access the function faster, and are compatible with vim and usually vi." So even the discoverability advantage of Notepad++ fades away: between it and gvim, we have two fairly powerful editors, except that the latter allows everything to be done with the keyboard and has a potent command history.

Huh. When I started this post, I was going to avoid "my editor is better", and stick to "my editor does these cool things." And then I was checking Notepad++ features and being impressed by what it had (I hadn't known it had macros). But now I seem to have argued myself into "vim, or at least gvim, quite plausibly *is* better: it can do everything[1] yours can, *and* these things unique to it (and maybe emacs.)"

[1] Not everything a great IDE can do, but I'm fine with that; I don't need my editor to compile code for me, there's a command line for that.

Edit: All that said, I just installed gvim, and its menus do seem less complete than Notepad++'s. In particular, while Repeat is there, I don't see any menu options for creating macros, plus many of the other things N++ makes available, like "sort lines". (On the IDE front, it *does* have "make" and stuff like "find next error".) I know that you can create new menus for gvim, and maybe someone has created more comprehensive ones, but a newbie won't know that. And while there's online help, it's vimmy in navigation. So I have to give newbie friendliness back to Notepad++.

Edit 2: After some further thought and reading, I'll posit that for fast and flexible general text editing, nothing beats vim mastery, not even emacs. For general programmability beyond being an editor, nothing beats emacs. For integration with some specific language, hopefully an IDE gives features that even IDE-like features of vim don't... but it probably is inferior as a text editor, which makes it a matter of tradeoffs: what do you want to optimize for? And how much do you want to be tied to a language-specific tool?
mindstalk: (Default)
Or at least my office.

I think I've snarked before that my company would be an IT department's nightmare. By last May the three of us were using three different VMs, Linux distros, and shells. My boss later fled SUSE for Ubuntu, though a later version than co-worker W3's. (He preferred old SUSE but newer ones weren't working for him.)

Today we wanted to install some new packages, and discovered that W3 had not been updating their distro, and that Ubuntu releases have a 9 month lifetime. Apparently meaning that the package repositories *go away*. W3 had been on 16.10, so by now, the next version 17.04 is *also* expired, and thus there is no upgrade path. Fun!

Boss tried to install 17.10 on VMware, which was already annoying because 18.04 LTS is coming out in 4 weeks, but we can't wait for that. It turned out to be even more annoying: he wasn't getting shared directories working.

Fallback plan: install VirtualBox, and copy my image over to W3's computer. This turned out to work. I should have been more confident in that, since that's how I got my image onto my second laptop, and my personal laptop, and for a while I'd been running off an image on a USB stick until that died rapidly.

W3 probably doesn't care about using VirtualBox. W3 is a Windows-based web developer who was using Unity, so *will* care about a CLI-based Arch Linux install with XFCE4. But hey, what works. And actually, the fact that you start X from the command line makes it easier to try out different GUIs!

W3 did like the zsh capabilities I showed them, though never actually switched from bash. Maybe now's my chance! >)

While trying to do helpful research, I discovered that 17.10 had switched from Xorg to Wayland, but that was successful like Prohibition so they're switching back in 18.04, along with going to GNOME 3. I'm glad I'm not using Ubuntu any more...
mindstalk: (Default)
At work today, Boss suggested I look at sqlite a bit, since our client code uses it. What I thought might be a brief glance turned into hours of reading, as it became rather fascinating. For those who don't know, it's an embedded SQL database, with not much code, unlike the client/server databases of Oracle or anything else you've probably heard of. As their docs put it, they're not competing with such databases, they're competing with fopen() and other filesystem access.

They call their testing "aviation grade", possibly without hyperbole: 100% branch coverage, 100% coverage of something stronger than branches, 700x more testing code than actual library code and a lot of that generates tests parametrically... it sounds pretty nuts. They worship Valgrind but find compiler warnings somewhat useless; getting warnings to zero added more bugs than it solved. https://www.sqlite.org/testing.html

They claim "billions and billions of deployments", which sounded like humorous hyperbole until they added being on every iPhone or Android phone, every Mac or Windows 10 machine, every major browser install... There are over 2 billion smartphones, so just from the phone OS and the phone browser, you've got 4 billion installs...

They also make a strong case for considering an SQLite database any time you'd be considering some complex file format. With almost no code to write, you'd get consistency, robustness, complex queries, machine and language independence, and at least some ability to do partial writes[1], compared to throwing a bunch of files into a zipfile.

https://www.sqlite.org/appfileformat.html
https://www.sqlite.org/affcase1.html
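The "competing with fopen()" pitch is easy to see from Python's built-in sqlite3 module. A minimal sketch (':memory:' stands in for a real filename; the table is made up):

```python
import sqlite3

# an application "file format" is just one database connection;
# in a real app you'd pass a filename instead of ':memory:'
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO note (body) VALUES (?)", ("hello",))
con.commit()

# a partial write: touch one record, not the whole file,
# and the library handles atomicity for you
con.execute("UPDATE note SET body = ? WHERE id = 1", ("hello, world",))
rows = con.execute("SELECT body FROM note").fetchall()
```

Compare that to hand-rolling parsing, locking, and crash recovery for a custom format.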

They also had a nicely educational description of their rollback and write-ahead models. https://www.sqlite.org/atomiccommit.html
https://www.sqlite.org/wal.html

[1] I do wonder about this. One odd thing about sqlite is a looseness about types, and AIUI cramming numeric values into the smallest range that will hold them. So I'd think that if you UPDATED a value 100 to a value 1000000000000, you'd have to shuffle the trailing part of the file, compared to a format that e.g. reserved 8 bytes for a numeric type. But maybe they do buffer numeric or string storage. And not having to write the whole file, or not having to read the whole file (e.g. to decompress it) seem like at least partial wins.
mindstalk: (Default)
In April a friend introduced me to csvkit, a suite of command line tools for manipulating CSV files, including doing SQL queries against them, and that sounded cool so I made a note. A bit later, friend Z Facebooked about q (the worst software name ever), which also runs queries against CSV files. I made another note.

My use case is my finances, which I'd been keeping in ad hoc text files like "May2015", with some awk scripts to sum up categories in a month, and crosscheck that the overall sum matched the sum of all categories, to detect miscategorization. It worked well for that task but wasn't very flexible, and late last year I had the idea of finally going to 'proper' software. At first I assumed a spreadsheet, because spreadsheets = finances, right? But then I realized that for the queries I wanted to do, SQL was more appropriate.

So I wrote a Python script to convert my years of files into one big CSV file, with the date broken down into year, month, and day for easy queries, and my text tags converted into a category column. Then I imported it into MySQL and it was good.

But what about going forward? I spend more, and make new text files... making notes in the full format (date, year, month, day, amount, category, notes) is a pain, and I kept forgetting how to import more into MySQL, and I just let things slide.

Last night I decided to get back to it, as part of checking my spending and savings, and checked out the old tools, with this year's spending in a simpler (date, amount, notes) CSV file.

Both programs work, and I figured out sqlite for extracting month on the fly (so I can group sums by month, or compare power spending across all Junes, say.) Sample queries:

q -H -d, "select sum(amount) from ./mon where code like '%rent%'"

q -H -d, "select strftime('%m', date) as month, sum(amount) from ./mon where code like '%transport%' group by month"

csvsql --query "select Year, sum(amount) from money2 where Month='06' group by year" money2.csv
#that's against the more complex CSV


How do they compare? Probably the more important is that q is way faster, perceptually instantaneous on a 7000+ line file, while csvsql has notable startup time. Both are Python, but csvkit also requires Java, so maybe it's starting a JVM in the background.

q is much lighter, an 1800 line Python program; csvkit has a long dependency list. I tried using the Arch AUR package, but don't have an AUR dependency tracer, so ended up using 'pip install csvkit' instead.

q needs to be told that the CSV file is actually comma separated, not space-separated, and has a header; OTOH csvsql needs to be told if you want to do a query, and the file you're querying.

It looks like both only do SELECT, not UPDATE; I'd wanted to do UPDATE in cleaning up my booklog CSV file but ended up resorting to another Python script. (After trying to push everything into a real sqlite database, but failing to get the weird CSV imported correctly.)
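That fallback script amounted to something of this shape -- a sketch, not my actual script, with the column name and values made up:

```python
import csv, io

def update_column(text: str, column: str, old: str, new: str) -> str:
    # poor man's UPDATE: read every row, rewrite matching values in one
    # column, and emit the whole CSV again
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        if row[column] == old:
            row[column] = new
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

fixed = update_column("date,amount,notes\n2018-06-01,5.00,cofee\n",
                      "notes", "cofee", "coffee")
```

Clumsy next to a one-line SQL UPDATE, but it sidesteps the import-into-a-real-database step entirely.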

q only does queries; csvsql does more, I dunno exactly.

q has a man page, csvkit docs are entirely online.

I'll probably be using q.

Why not use an actual database? Mostly to cut out steps: new expenditures or books read are easy to update in a text file, and if I can treat that as a database, I don't need a step to update some other DB.

mysql felt heavy and clunky, though thanks to work I now know about the '~/.my.cnf' file which can store authentication. You still need a mysqld up. sqlite3 can run directly off a file and is certainly worth considering -- though as noted, I never got it actually working.
mindstalk: (Default)
Firefox 52 dropped support for ALSA systems. Arch Linux users were insulated from this: the code was still there upstream, just not enabled by default, and it was enabled in the Arch package. As of FF 54 though, poof, it's gone for good. I'm not sure exactly why Pulseaudio is avoided, but I'm still avoiding it... so I need another browser.

There are lots, actually! Currently trying Seamonkey on the work VM, mostly because it's the one alternative which is both based on the same engine as FF (so familiar, and plugins should work) while having a supported package on Arch. There are a couple more, including Pale Moon, but they need the AUR, and I'm lazy.
mindstalk: (Default)
So, I play freecol. A while back, it started behaving badly -- popup windows would lose focus, and I'd have to lower and raise the window to get it back. It was an annoying ritual, but I stuck with it.

Today at work, I'd stopped working, but had some time to kill before my next event, so installed freecol on the VM. To my surprise, it behaved the way you'd expect.

Fresh install, so maybe I had broken configuration at home? Went home, nuked all the directories, tried again. Nope.

Well, the other difference is that I've been using twm at home -- it's primitive but lightweight and familiar -- but xfce on the work VM, out of necessity to get things like resizing display. So I installed xfce4 at home, and tried that... yep, freecol played nicely.

Maybe I had something wonky in my .xinitrc? Nuked it down to just running twm... nope, still bad.

So I guess something in a freecol update stopped playing well with a 1980s window manager. Oh well. Maybe I'll just switch to xfce at home (though it'll be confusing when I'm running Arch/xfce on both the VM and the host...) But I'll need to configure it, to get some key mappings, and move-to-focus.

Nope, I don't need to; they're there already. HOW? That's really spooky.

I hunt down the config -- .config/xfce4/ -- and look at the modification times. Some are tonight, but some are 1 Nov 2012. "Wait a minute."

See, sometime after putting Ubuntu on my laptop, I played around with a whole lot of graphical environments and window managers, then upgraded, and broke Ubuntu for good. But that's another story; the point is that it's suddenly plausible I installed xfce back then -- on another OS -- configured it to taste, and moved on, leaving the preferences buried in my home directory.

Well, I keep a detailed journal for a reason. I check... and yeah, while I don't mention xfce specifically, 1 Nov 2012 was a day of messing around with such things.

"Wow! So I somehow copied my home directory in toto, between laptops, picking up weird directories like .config. I'm impressed."

"...no, I'm a dumbass; it's the *same laptop*."

OTOH it *is* a whole different version of Linux. Did I install Arch on top of Ubuntu and keep my home dir, or copy out my home dir to an external hard drive, to copy back after installing Arch? I honestly don't remember, but either seems plausible, and would get the job done.

Actually there's an /old directory on the hard drive, basically an old root directory, which I think is evidence that I managed to drop Arch right onto Ubuntu after I made a copy. There's even /old/etc/os-release, saying "14.04 Trusty Tahr". (It was not trusty; it refused to boot and I switched to Arch. Though now I'm not sure how I made the copy. Maybe I did go through a hard drive?)

Anyway, one way or another, five year old configuration I'd completely forgotten about stayed with me, and worked smoothly. I guess the real surprise is that xfce didn't change its configuration system in five years, not enough to break things!

Edit: on playing again, it was broken again! Waaaa. After more investigation, it seems broken with twm no matter what, even with an empty xinitrc, but with xfce, it breaks when scim is running. That's my Japanese input interface, I'm not giving that up. :(

I guess I could play in the VM. Or I could play less, that'd be good...

aliasing fi

2017-May-13, Saturday 07:12
mindstalk: (Default)
I think I mentioned not long ago that I found I'd been aliasing fi=finger which breaks if loops in my shell, and marveled that it took so long to find that. It makes more sense to me now.

1) Yeah, I didn't script much.
2) When I did do an ad hoc script at the prompt, it was a for loop.
3) Scripts you get are mostly bash scripts.
4) Even an explicitly written zsh script wouldn't have a problem: my aliases are loaded by .zshrc, which is loaded by interactive shells, i.e. not script shells[1].
5) Only when I tried pasting an if loop into a *function*, also loaded by .zshrc after my aliases, did a problem occur. Possibly it had occurred before and I simply gave up on some unnecessary function that mysteriously didn't work.

[1] This also sheds light on past failures to ssh in somewhere and invoke a function directly: not an interactive shell, so no functions loaded. When I try 'ssh ... "zsh -i script_invoking_function"', it works. So if I want remote function invocation, I'll need to use -i or to load functions outside of .zshrc.

why zsh?

2017-May-11, Thursday 21:21
mindstalk: (Default)
When I got to Caltech and discovered Unix, the default shell on the cluster was csh, with more user features than the sh at the time, but not a lot. If you got the lowdown, you could switch to the far more useful tcsh, but the sysadmin refused to make that the default for resource reasons. There was also ksh, but I never heard people talking about it.

A few years later zsh came along, and the more techie undergraduate cluster largely switched to it en masse. It was even made the default shell there.

Out in the greater world, and in the era of Linux, bash seems the default shell, pretty much incorporating much of what was good about tcsh and ksh, and also displacing any more primitive sh. zsh still is an exotic thing even Linux people may not have heard of... which is a shame, because it's so much better.

Granted, it's also way more complicated, and a lot of its cooler features have to be turned on. If you want a shell that's full-featured out of the box, there's the even more obscure 'fish'.

And bash can approach, though not catch up to, zsh with the "bash-completion" package.

But what's so cool? Well, tab-completion can be far more powerful, working not just on filenames, but environment or shell variables, command options, man pages, process numbers, and git branches. It can also go to a menu mode, for scrolling around lots of options.

(But fish will do the magic of parsing man pages on the fly to display command options. :O )

It's easy to have your prompt display the exit code of the last command, something I find pretty useful; doing that in bash requires writing your own functions.

Likewise, you can easily have sophisticated right-hand prompts.

**/ recursive directory listing, though that is something you can turn on in bash. (shopt -s globstar)

Even more extended globbing, including excluding patterns, or selecting files based on modification time within a window and other criteria.

Redirection tricks, some of which reduce the need for tee. |& pipes stdout and stderr to a program such as less. >! can clobber files even when you have noclobber on.

I'd anticipated sticking to bash for scripting, for better standards compliance/portability, but I realized that I'm not writing a package script, just in-house tools. And zsh scripting has a lot going for it. Arrays just work, while bash arrays were described Sunday as the worst of any language. I'm using the mod time glob mentioned above.

zsh can share history between shells. I find this useful and annoying -- useful now for storing and reusing commands, but also destroys the individual history of a particular window. Oh well. An impressive application was when I found myself reusing history across *machines*, where my home dir was NFS mounted.

"Named directories" mean I can collapse long pathnames in my prompt, e.g. Main/wsgi-scripts becomes just ~WS

Probably a lot more, but those come to mind.

That said, there is one odd lacuna in zsh. bash has --rc-file, to tell it to read in a custom rc (like bashrc) file after everything else. zsh... doesn't. And sometimes I would like to start a shell with a custom additional environment, e.g. from ssh.
mindstalk: (escher)
My boss apparently figured out the problem with the VMware clone: fakes3 (which fakes a local Amazon S3 service) apparently behaves badly if given the local hostname (or just localhost?) rather than an IP address, and I'd probably edited the clone's files to use a name, because why wouldn't you?

And I tackled my VBox again, and got shared directories working! I'd found a different set of instructions, which worked for manual mount, and then even automount from fstab. Going back, I reproduced the error I'd gotten from the Arch instructions: I'd been using '/vmshare /vmshare' (Windows and guest locations, a la VMware's FUSE command syntax) when it actually wanted 'vmwshare /vmshare' (the short name of the shared directory in VBox, then the guest location). "Protocol error" is a pretty terrible error message, but I can see now what I was doing wrong.

I also found VBox's Seamless Mode. I don't quite see the deep point of it yet, though it does reclaim screen space from the Windows title bar while still leaving the start/monitor bar at the bottom. It's allegedly similar to VMware's Unity Mode, except Unity says it doesn't support Linux guests.

So VirtualBox seems strictly better than VMware in features, since it does everything one would want, while VMware doesn't do Unity or (more important to me) touchpad scrolling. OTOH we probably have the VMware clone at least behaving the way we expected it to. Though I'm not sure this was actually tested on it, I think it's a prognosis based on fixing something on the Ubuntu machine.

Also VBox is open source and doesn't charge you money for running more than one VM at once. OTOH we already paid VMware the money.

I haven't tried comparing performance.
mindstalk: (Default)
My Arch VMware still doesn't do touchpad scroll, not that I've tried.

I cloned it for my co-worker, edited the accounts, tested the system, it worked fine. Copied it to the shared hard drive, then to her laptop. And now it has quirky IP address or hostname lookup issues that we can't figure out, such that the boss decided to start over.

With OpenSUSE! He trusted the official VM tools, it didn't work. May have tried Open VM Tools, I stopped paying attention.

Co-worker moved on to Ubuntu, using an image from OSBoxes, which I view as a potential NSA/mafia front, but hey, it's not my IP. That seems to be working, possibly in all ways.

I was inspired to go back to VirtualBox, and started over from scratch. After 40-50 minutes, mostly waiting for packages to download and install, it was ready, with X and XFCE and Firefox. Display resizes, cut and paste works, even scrolling works! Everything... except shared folders; I thought I followed the Arch instructions, but I get a "Protocol Error".

Sigh.

I have continued to realize VMs are cool. I could have a second Arch VM and play with desktop environments without messing up my working one. Or play with Ubuntu and Red Hat without rebooting. Or you could skip "will Linux work on this laptop?", install VBox or QEMU on Windows, then go full screen and ignore Windows almost entirely.
mindstalk: (bujold)
LXDE was happy starting from startx, but it doesn't have a way to configure move-on-focus. Searching turned up "you can't" and "apply this 100-line XML file somewhere". So I moved on to xfce -- the full thing, not just xfwm. It's not happy starting from startx/.xinitrc, or I'm doing something wrong, but it provides its own startxfce4. It resizes, doesn't crash, and had a single step for turning on the One True Mouse behavior. I was also able to configure my usual X keys (I have some function keys mapped to window raise/lower/minimize). Plus it's supposed to be fairly lightweight. So that's where I'm at for now. I haven't explored it much since; if it lets me move my windows around, I'm good. (I'm used to twm, after all, which is pretty much nothing but that.)

On the downside, touchpad scroll still doesn't work. This may be a VMware problem. I was working on a VirtualBox image, but it didn't go smoothly. First I tried export/import, which didn't seem to work -- frozen boot screen. But when I went to Close it, the proper display flashed up, and I've been able to find that it does boot and have my account. But it's not *usable*. Possibly X would fix that. I also tried a pure install, but after applying some tweak to make console resize, it didn't want to boot at all. I haven't had time to go back and try pure vanilla. And the VMware image I'm working with is getting more and more developed, it may be hard to switch.

I also found a program to make *Windows* use move-on-focus. I should probably tell you what it was, but I don't remember, and the info seems to be only on the office machine. But it's SO useful, at least for my workflow which uses overlapping windows a lot. (Often a full-screened browser or VM (or browser in VM) and some other window I'm taking notes in.)

OTOH I really wish I could make *Windows* raise and lower windows with a key.

New co-worker coming Monday, she'll need a VM too. Rather than re-installing, I simply cloned mine. Easy! And purged my limited personal info on it, a bit more work. And got the system working on it... that was a lot more work, we've got too much hardwiring of local IP address. Which will interfere with putting code in source control too, so we've got a double incentive to fix that.
mindstalk: (CrashMouse)
My new work laptop has Windows 7 Pro at base, which we need at some point, so for Linux we've been trying to put Linux into a VMware Workstation. Since I use Arch, I tried for Arch, even though it's not listed as supported. It's been a fun couple of days. Some of that my own fault: though I did wonder about boot information, I missed the "choose and install bootloader" instructions three times running. Some, well, while Arch does tell you to enable dhcp, you have to click through and read everything; it's easy to think it's up by default.

Then there's VM Tools. Supposedly even VMware tells you to use "open-vm-tools" rather than what they provide, but a couple webpages said certain features would work with the official tools. But its installation script failed straight out of the ISO, on a clean install. That's never good...

There's a site OSBoxes.org, which provides VMware and VirtualBox images of various OSes. No idea who they are, and I'd be paranoid about trusting some unknown OS image. OTOH, I did end up downloading a few to see if things would work at all -- Arch CLI, Arch KDE, Ubuntu.

Discovery: don't think I like KDE or Ubuntu's UI, but the latter did have full screen and cut-and-paste between Linux and Windows. The Arch ones didn't seem to, so it didn't seem worth trying to track down a difference in configuration.

One cool thing about VMs is that you get to treat 'machines' as documents. I'd started making copies and snapshots, and when messing around with official VM Tools failed and broke things, I was able to pop back to an instance before that. Woo.

And with that, trying open-vm-tools again *very carefully* and avoiding conflicting paths, I got shared folders working -- even without the auxiliary tool the docs said I would need. Sweet! But fullscreen and pasting still didn't work.

OTOH, by default I use startx and the ancient environment of twm. xfwm4 didn't 'work' either. Finally I tried Cinnamon... the window manager of which promptly crashes. But the session hangs around, and voila! fullscreen and paste! So I guess I'm going to need some sort of full session for this thing, not just a WM.

Also twm was able to take over the apps, and then the Failsafe Desktop or something becomes a window managed by twm. That's just surreal.

LXDE was happier starting from startx, and that's what I've got now.

Still missing: touchpad scrolling, which is a big loss. Hope I can get it...
mindstalk: (12KMap)
My phone (Android 4, CM 11) swipe input is weird when it comes to profanity. So, there are three levels: its first guess for your word, two alternates to the side, and then a list you can bring up. 'suck', 'sucking', 'shit', and 'dick' will never appear in the first two levels, even if I go slowly and letter by letter -- 'shi' turns into 'shot', with 'shirt' and 'shoot' as alternates. Peter thinks it's a probability weight thing; I figure it's hardwired, with a short list of words being simply barred from your being able to input them too quickly (i.e. accidentally.)

But 'fuck', 'fucking', and 'fucker' I can enter quite easily. 'cunt' too. And 'pussy', with a bit of care (it's hard to swipe a double letter.)

Peter thinks it's also probabilities weighted by how you've used the phone, but I use 'suck' and 'sucking' in texts far more than any of the others... I've probably never tried to swipe 'cunt' before.

My thought for a while was that maybe fuck* weren't in the dictionary at all, until I added them, so wouldn't be barred, but I checked my personal word list and nope, they're not there.

So I dunno. Maybe it's more of an "accidentally unprofessional" filter, like words on the edge of acceptability are barred so you don't tell your boss how much something sucks, but the designers figured if you want to go full vulgar you knew what you were doing. Not how I'd do things.... and no, I don't see a profanity filter I might have turned off.

Wait, I'm wrong! I just checked again, and yes, Android Keyboard has a "block offensive words" option which is off, so I probably did that at some point... and I'm *still* not getting 'suck' or 'shit' as choices above the third level.

***

Totally unrelatedly, I finally have Japanese input working on my laptop! 日本語よ! I'd tried UIM a while back, per the Arch Linux default recommendation, but it didn't work. Then I tried SCIM a few days ago, and it seemed not to work either, but later I found myself suddenly typing in Japanese. Ctrl-Space turns it on; Ctrl-Shift cycles through modes (Anthy [Japanese], Unicode, English/European [which doesn't seem to do anything]). I should look into configuring that, because I need Ctrl-Shift-C and -V to copy and paste from/to my Terminator terminal, so there's an annoying conflict there. Still, woo!

Is this more than a toy, given my weak Japanese skills? Slightly: online dictionaries tend to work best with Japanese input, not romaji, so now I can actually use them. And I've started studying it again, so that helps.

today's events

2016-Nov-05, Saturday 15:53
mindstalk: (juggleface)
Crisis! I was going for my measuring cup, and a wine glass fell out of the cabinet and utterly shattered! Oh no! Except, it fell into the large kitchen sink. Total shrapnel containment! Except maybe for one piece on the counter, I don't know if it leapt up there or fell off when I was picking up pieces to put in the trash. But yeah, as far as shattered glass crises go, this was about as mild as it could be.

***

I'm not experienced or bold with hardware. But I've been worrying about my laptop fan for a while. Partly the temperatures reported by acpi -t[1], partly the knowledge that I'm much better at washing dishes than I am at dusting my household, and between me and my tendency to live on mildly busy streets, the dust piles up amazingly fast. So I randomly decided today to see if I could clean it out a bit. My old set of screwdrivers doesn't have one small enough for laptop screws; fortunately, in cleaning a few months ago, I found another set of screwdrivers I got who-knows-how, which does. Sometimes, hoarding really does pay off.

So I carefully took out and arrayed the screws -- I could have been more careful, put them in a cup, but I trusted my careful habits, mostly correctly in this case -- and eventually got one part of the bottom off, exposing the fan. There wasn't that much dust, actually; either it's piled up in the internals, where I'm not brave enough to go, or the fan actually works. There was some dust on the battery grill, and a lot on the fan grill -- kind of a lot: enough to obscure vision, not so much that I was rolling off felted mats. Then again, I took a hand vacuum to it quickly enough.

Apart from that, meh. Put it all back together, cleaned the table, turned it on, and hey, it still works! *And* it's claiming a much lower temperature than it usually does. Success?

But, we have some screws loose -- literally. Most of the screws are pretty short, but two were long and deeply recessed. And a couple seemed to just fall out of the laptop, too -- one long, one short. I thought I had everything lined up properly, and I don't remember any holes being screwless in the first step, but in the end I was a short screw short, despite supposedly having an extra. I don't think it'll matter, the panel is secured by two other screws (but it's a dust hole!), but, weird. As for the spare long screw, I put it in a coin purse.

[1] Such as they are: it was alternating between 56.5 C and 62.5. And not in the course of operation: one boot would be at 62, the next at 56. Now it's claiming 36.5. So I don't have a lot of faith in the accuracy, but insofar as it's measuring anything at all, that may have improved.
mindstalk: (robot)
most: the pager sequence now goes more, less, most. most's big thing seems to be displaying multiple windows. It's also good at scrolling sideways, though I just learned less can do that too, so I'm not sure there's a difference there.

rlwrap: applies a readline wrapper to interactive programs that don't use readline directly, like ocaml or 'perl -de 1'. If you use rlwrap -m, then ^^ summons an editor on your input.

man 7: there's a whole lot of odd information there.

.inputrc lines for better history:
"\e[A" history-search-backward
"\e[B" history-search-forward
searches (up-arrow) with what you typed so far as a prefix. There's also ^R, which you type first, followed by what you're looking for, and use ^R again to search other matches.

I recall that years ago, zsh searched on prefix. Then for a long time it had gone to only searching on the first word. I finally got the behavior I wanted back, though it's more involved.
.zshrc:
(* edit: these lines don't actually work for me. I thought I did but I must have tested in a shell with the other ones already loaded, not in a fresh shell.
autoload -Uz up-line-or-beginning-search down-line-or-beginning-search
zle -N up-line-or-beginning-search
zle -N down-line-or-beginning-search
[[ -n "${key[Up]}" ]] && bindkey "${key[Up]}" up-line-or-beginning-search
[[ -n "${key[Down]}" ]] && bindkey "${key[Down]}" down-line-or-beginning-search
*)

or
autoload -U history-search-end
zle -N history-beginning-search-backward-end history-search-end
zle -N history-beginning-search-forward-end history-search-end
bindkey "^[[A" history-beginning-search-backward-end
bindkey "^[[B" history-beginning-search-forward-end

I don't know the difference between the two, if any. The -end stuff in the second case makes it move the cursor to the end of the line; otherwise it just leaves the cursor where you left it, which I hate.

zsh tricks:
ls > file1 > file2, or ls > file1 | file2. Duplicates the output. More compact than messing with tee.
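This is zsh's MULTIOS option (on by default); a sketch:

```shell
setopt multios              # on by default in zsh
ls > file1 > file2          # both files receive the listing
ls > file1 | grep zoot      # file1 gets it all, and the pipe still sees it
```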

I just spent an embarrassing number of seconds trying to see if "ls | cat | cat" would "duplicate output", before I remembered what piping *does*.

You can set zsh options so tab completion lets you scroll around the choices. My mind is blown. I'm not sure what the minimal set needed is.
setopt auto_menu auto_list
seems like a good starting point. But I'd just re-started the configuration wizard and turned almost everything on and stuff started being cooler.

I played with the shell fish ("friendly interactive shell") again. I'll probably never leave zsh at this point, but fish does lots of neat things out of the box, vs. having to turn them on in zsh via research or going through the startup wizard. I think zsh's completions are more powerful, but it's a close race.

[Edit: hmm, I just found that for ocaml, zsh doesn't provide anything, but fish does. I'd guess fish is doing its parsing of man pages thing, rather than knowing about ocaml from installation.]

kill completions:
fish:
phoenix@mindstalk ~/zoot> kill 
1            (systemd)  1430           (bioset)  1985  (systemd-journal)
2           (kthreadd)  1431           (bioset)  1990          (kauditd)
3        (ksoftirqd/0)  1432           (bioset)  2155    (systemd-udevd)
5       (kworker/0:0H)  1466           (bioset)  3048            (crond)
…and 46 more rows


zsh:
[mindstalk:0] kill 3360
 3360 pts/3    00:00:04 zsh                                                    
18729 pts/3    00:00:00 zsh                                                    
18730 pts/3    00:00:00 ps 


bash (with bash-completion package):
[phoenix@mindstalk zoot]$ kill 
Display all 151 possibilities? (y or n)
1      1422   1474   1499   15636  18739  2      3048   3337   491    779
10     1423   1477   1500   16300  18740  2155   3049   3338   5      780
10445  1424   1480   1501   16972  18741  23548  3050   3353   637    782

(and lots more PIDs).

As you can see, fish and zsh try to give you useful information, or at least a name. zsh seems limited to processes on the current tty, while bash lists all processes, regardless of whether you can kill them. fish also lists every process. No doubt zsh completion could be configured to do the same. (What I'd actually want is a list of all of my own processes; it's not like I can kill root's.)

Likewise, bash's idea of command option completion is to just list them; the other two shells give descriptions. (What I really learned tonight was that bash does such advanced completion at all.)

sudo !!
A very old simple trick for when you try to do something but it needs sudo. If I'd known about !! I'd forgotten until today, though I knew about !num to get at a specific history command.
mindstalk: (Default)
On IRC we'd been discussing procmail, and its lack of maintenance, and whether it *needs* maintenance other than security fixes. I snarked about wc not needing updates... then checked and found that its web page was dated Jan 2016, because GNU. This led to Ian complaining about ls having too many options, and he didn't even know about the dired output ones for emacs integration. I count about 56 options. That's a lot!

OTOH, I use a lot of them:

All my aliases use -F and --color=auto.
lt uses -ltr
Others use u, A, s, h, and d. That's 10.

I discovered L recently, and found it useful. Others on the list look interesting: --group-directories-first, R, S, X. 14 total! Still a fraction of the total, but I'm not going to say the others are useless.

Are they redundant with the Unix way? E.g. all the sort options could instead be piped to /bin/sort. OTOH that would be more verbose, and less efficient, especially for e.g. a numeric sort on filesize: easier to sort within ls, which has the numbers as numbers, rather than to print them as text to stdout, read them in again and convert, then print out again. Or more commonly, sorting by modification time, as a human readable thing? Ew.
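For example, comparing the built-in sort with a pipeline version (a sketch; sort's -k5 assumes the size is the fifth column of ls -l, which is the usual layout):

```shell
ls -S                    # by size, biggest first, inside ls
ls -l | sort -k5,5nr     # roughly equivalent, re-parsing the size column as text
```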

*** Reference

-F: append / for directories, * for executables, and @ for symlinks.
--color: colors by type
-l: detailed listing
-t: sorts by modification time, newest first
-r: reverses sort
-u: show last access time
-A: show dotfiles, but not . and ..
-s: show file size in blocks
-h: print size in human friendly form, like 4.3M
-d: shows properties of a directory, rather than its contents.
--group-directories-first: duh
-R: recursive
-S: sort by size, biggest first.
-X: sort by extension.

machine go boom

2016-Sep-06, Tuesday 03:00
mindstalk: (Default)
College (and other) friends and I have shared a server for many years, racked in some colo place. This instance, the third, was bought in 2003, and has served us far longer than we expected. In the past couple days we basically got to watch the RAID die in real time. Still not sure if the disk filling up was a trigger or result or unrelated, but today I watched it die with only 88% full disk. I got to see even some of my own files turning corrupt, like being owned by another user.

Robbie and another friend had unkind things to say about hardware RAID. We'd gotten hardware RAID, 3ware, set to redundancy mode for the server. We'd thought we were doing really well, with some tool reporting no disk failures... now someone else says it may have lied, with disk problems we weren't told about.

OTOH other friends say software RAID really wouldn't give performance or even safety guarantees. I dunno. But the damn thing did survive 13 years of probably somewhat heavy use, with our disks from one vendor; we sure got our money's worth.

The question now becomes "what next?" A bunch of us were still using it as an active server, like for mail, so a replacement would be nice. Previous machines were graciously retired and replaced on a plan; I'd kept urging us to go to machine 4 over the past few years, but people were lazy, and I was in no position to physically volunteer.

Of course, today we have VPS. Since I cleverly had mail going to my own domain, hosted on the server, I found I was able to get my own linode, transfer DNS, and get basic mail working, in under 3 hours. Hopefully at this point I won't *lose* mail, though I have yet to get procmail -- or some more secure replacement -- up; I really depend on filtering. And I don't know about spam... we had greylisting going, which probably prevented a lot of spam even before my powerful spamprobe filter; right now I'm exposed. But it's after 3am, it can wait a day or two.

Anyway, someone could probably replace our machine with a VPS quickly... if they had control over our DNS. That's probably one guy, on vacation right now. Whee. Also, while I backed up my own files, I never thought to grab the passwd or shadow files; if no one else did either, actually making accounts for everyone would be a pain.
