+1 vote

I have a process that currently uses rsync to deploy files to a web server. But now I have a case where two different automated services might want to deploy at the same time. Does anyone have any thoughts on simple ways to set a mutex?

+1

rsync-ing to the same destination should not be a problem: the most likely outcome is that the last process to "touch" a file wins the race.

2 Answers

+1 vote
 
Best answer

If you can have the two services write different files (or folders), then you don't need any locks and you can stop worrying about the problem entirely. If, on the other hand, both processes need to sync the same files, you could use cron jobs to schedule the synchronization at different times, so that one or the other runs, but never both at once. If you can't avoid them deploying at the same time, you can simply touch a .lock file and remove it when finished; this is what git does as well with its index.lock file. If a process crashes before it has finished, though, you could be left with an orphaned lock file that you will have to delete manually.
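The touch-and-remove scheme above can be sketched as a small shell script. The lock file name, messages, and the trap-based cleanup are illustrative, and (as a comment below points out) the existence check and the touch are two separate steps, so two processes can still race between them:

```shell
#!/bin/sh
# Sketch of the touch-style lock described above.
LOCKFILE=.lock

if [ -e "$LOCKFILE" ]; then
    echo "another deploy is in progress, aborting" >&2
    exit 1
fi
touch "$LOCKFILE"
# Remove the lock even if the script exits early or is interrupted.
trap 'rm -f "$LOCKFILE"' EXIT

# ... run rsync here ...
echo "deploying"
```

The trap on EXIT covers normal termination and most signals, but not a hard kill or a crash of the machine, which is how the orphaned-lock situation arises.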

+1

Checking whether the file exists and "touch"-ing it only if it doesn't is not atomic: another process can create the file between the check and the touch.

0

Yeah, after I dug into it more, I saw there's a process that does something with some of the files, so I need to lock them somehow so another service can tell that the remote is busy. I've got something set up now that will set the lock, abort if it's already locked, or remove the lock if something went wrong.

+1

@mqaptheu You can attempt to write to ".lock" with the bash noclobber option:

set -o noclobber
echo > .lock

With noclobber enabled, any redirection with > will fail if the file already exists. From the bash manual:

If the redirection operator is >, and the noclobber option to the set builtin has been enabled, the redirection will fail if the file whose name results from the expansion of word exists and is a regular file.
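Because the redirection itself fails when the file exists, the test and the creation happen in a single step, avoiding the race in the check-then-touch approach. A minimal sketch (file name and messages are illustrative):

```shell
#!/bin/bash
# Atomic lock acquisition using bash's noclobber option.
set -o noclobber

# The redirection fails if .lock already exists; the braces let us
# silence bash's "cannot overwrite existing file" error message.
if ! { echo "$$" > .lock; } 2>/dev/null; then
    echo "lock file already exists, aborting" >&2
    exit 1
fi
trap 'rm -f .lock' EXIT

# ... run rsync here ...
echo "deploying"
```

Writing the PID into the lock file is optional, but it helps when you have to decide whether an orphaned lock is safe to delete.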
+1 vote

There is a command called flock that you can use from bash. It works like this

flock [options] file|directory --command command

(see the manpage for the other syntaxes). It will open the lock file (creating it if necessary) and then execute the command once it has acquired the lock on the file. You can test it like this (start the two commands in parallel):

$ flock --verbose .lock -c 'sleep 10; echo X > output'
$ flock --verbose .lock -c 'sleep 10; echo Y > output'

Each of these will open the .lock file and write to output once it has acquired the lock. You can inspect the locks with cat /proc/locks or lslocks.

There is a gotcha, however. These are all "advisory" locks, which means all your scripts must use flock (or otherwise check the lock file). If another process decides to write without checking the lock, it will not have to wait for anybody and it will not be blocked.
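For the original question (abort if another deploy is already running), flock's -n/--nonblock flag fits well: it exits immediately with a failure status instead of waiting. A sketch, with an illustrative lock file name and the deploy step stubbed out:

```shell
#!/bin/bash
# Serialize deploys with flock; -n aborts immediately if another
# deploy already holds the lock.
exec 9> .deploy.lock          # open the lock file on descriptor 9
if ! flock -n 9; then
    echo "another deploy holds the lock, aborting" >&2
    exit 1
fi
# The lock is released automatically when fd 9 closes on exit,
# even if the script crashes, so no orphaned lock is left behind.
# rsync -a ./site/ user@host:/var/www/   # your deploy goes here
echo "deploying"
```

Locking a file descriptor rather than wrapping a command also avoids the orphaned-lock problem of the touch approach: the kernel drops the lock when the process dies, whatever the reason.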

0

Nice! I'll try that out.

Contributions licensed under CC0