Sunday 27 November 2016

Removing hot pixels in a stacked image

Sometimes even an aggressive hot pixel filter won't remove all hot pixels. Here's a technique that can remove any residual hot pixels in a final stacked image: I use PixInsight's Morphological Transformation with a star mask to remove these nuisances.
Here's a crop of an image, showing what I'm talking about. The image was taken with a DSLR and consists of a stack of 10 sub frames exposed for 15 minutes each at ISO 800. My camera, a Pentax K20D, is getting old, and I always have lots of hot pixels in my images. Calibration removes most, but frequently a number remain after image integration. The technique which I describe here will dim the remaining pixels.
hot pixels after stacking
I start by making a Luminance copy of the image in its linear state and applying an STF stretch to this grayscale copy. Then I use the StarMask tool with a low value for Scale (typically 3 works well) and a noise threshold of 0.5 (to be experimented with). I decrease large-scale, small-scale and compensation (1, 0, 1) and smoothness (about 6 - 8). Then I apply the StarMask tool to the luminance copy. It may be necessary to tweak the parameters; no stars should end up in the "Star-Mask" that is created.
When I'm satisfied, I apply the mask to the original colour image.
For pixel removal I use Morphological Transformation with Morphological Median as operator. Amount to about 0.5, iterations to 4 - 5, and Structuring Element to 9 pixels with a circular pattern.
Apply the tool to the image. If hot pixels of a certain colour remain, I split the RGB channels and repeat the process on the channel that has the remaining hot pixels. The result is this.
Same crop after hot pixel removal
Further tweaking of the star mask and morphology parameters can improve this result even more, of course.
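The core of this operation can be sketched in plain Python. This is an illustrative stand-in for PixInsight's Morphological Median, not its actual implementation; the mask, radius, amount and iteration parameters mirror the settings described above:

```python
import statistics

def morphological_median(img, mask, radius=1, amount=0.5, iterations=4):
    """Blend each masked-in pixel with the median of its circular
    neighbourhood (a rough analogue of the Morphological Median
    operator; parameter names are illustrative)."""
    h, w = len(img), len(img[0])
    # circular structuring element offsets
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if dy * dy + dx * dx <= radius * radius]
    for _ in range(iterations):
        out = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x] == 0:        # the star mask protects stars
                    continue
                neigh = [img[(y + dy) % h][(x + dx) % w] for dy, dx in offsets]
                med = statistics.median(neigh)
                out[y][x] = (1 - amount) * img[y][x] + amount * med
        img = out
    return img

# a flat background with one hot pixel; the mask admits everything
flat = [[0.1] * 5 for _ in range(5)]
flat[2][2] = 1.0                           # the hot pixel
mask = [[1] * 5 for _ in range(5)]
cleaned = morphological_median(flat, mask, radius=1, amount=0.5, iterations=4)
```

With amount 0.5 and four iterations, the hot pixel is dimmed towards the surrounding background level while the unmasked background is left untouched.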

Sunday 20 November 2016

First steps in guiding

Finally I have taken the plunge and invested in a guiding setup. I decided on the SkyWatcher ST80 scope with the ZWO ASI120MM camera. The camera is the older USB2 version.
As I don't want to take my laptop out in the field, I intend to use a RaspberryPi as a guiding computer.
The last couple of days and nights, I have been trying to get this to work. My configuration at the moment is this:
ASI120MM connected to RaspberryPi, running Ubuntu Mate as an operating system.
The Pi also holds an INDI server and the lin_guider software. The camera connects to the Pi and receives guiding pulses which it sends on to the mount (SW AZ-EQ6 GT) via the ST4 port.
Installation was quite straightforward, despite warnings that the camera driver may not be stable. Setting the exposure time to 1 sec in Lin_guider seems to work fine though.
Last night, despite partial cloud cover, I was able to test the guiding, and it worked fine.
Lin_guider connected to the camera, and frames started to flow in. Focussing was a bit of a hassle. I had to take my laptop out (despite the dew), and because there is no live view, it took a while to get focus right. In the end I had my setup guiding on Vega (which was grossly overexposed at any gain setting), and later on a nearby much fainter star. This worked fine until the stars disappeared behind my neighbour's trees and clouds rolled in.
I haven't tried imaging yet, and I still have to figure out the best settings for PID gain, but so far so good.
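While waiting to tune the gains, the basic idea behind pulse guiding can be sketched as a proportional controller: the pulse length grows with the measured drift. This is an illustration only; lin_guider's actual controller, parameter names and defaults differ:

```python
def guide_pulse_ms(error_arcsec, guide_rate=0.5, aggressiveness=0.7):
    """Convert a measured star drift (arcsec) into an ST4 pulse length.
    guide_rate is the mount's guiding speed as a fraction of sidereal
    (15.04 arcsec/s); aggressiveness damps overshoot. Illustrative only:
    a real guider also has integral/derivative terms."""
    sidereal = 15.04                       # arcsec per second
    correction = error_arcsec * aggressiveness
    seconds = correction / (guide_rate * sidereal)
    return round(seconds * 1000)           # pulse length in milliseconds

# a 1.5 arcsec drift at 0.5x sidereal with 0.7 aggressiveness
pulse = guide_pulse_ms(1.5)
```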

Saturday 3 September 2016

Creating a customized "Batch Process" in PixInsight

Some processes in PixInsight are adapted for large batches of images. But sometimes you want to do a sequence of process steps, for which there is no batch process, on several images. Opening each image and applying a number of processes is quite tedious.
Fortunately, PixInsight has a solution for this. It involves an image container and a process container.
For any process in PI, if you drag the small triangle in the lower left corner to an image, it will apply that process to the image. This can also be applied to a collection of images, if these are in an image container. And the process doesn't have to be a single process, it can be any number of processes that are in a process container. How is this done?

Prepare the process container.

Open an image and apply the processes you want to batch to that and other images.
Now open the image's history explorer, which should be located on the left edge of the workspace. Drag the small triangle at the bottom left to an open area in the workspace. This will create an instance of the process history of that image as a process container in the workspace.
Now you can close the image without saving.

Create an image container

Next, create an image container by right-clicking anywhere in the workspace, or by pressing Ctrl+Alt+I.
This will create an image container in the workspace. Open the container and add the image files you want to batch process. Also supply a name for the output directory where you want the processed images to be saved. Finish by dragging the small triangle to an empty spot in the workspace. This will create a new instance of your image container, with all the images in it.
Apply the processes in the process container by simply dragging the process container onto the image container that contains the images.
That's it. You've just applied several processes to a batch of images.
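Conceptually, the container mechanism is just a nested loop: every recorded process step is replayed on every image in the container, and the results are written to the output directory. A toy sketch in Python (the `load` stand-in and the file names are made up):

```python
import os

def run_batch(image_paths, process_steps, output_dir):
    """Mimic PixInsight's image container + process container:
    apply every recorded step to every image. Each step is just a
    function image -> image here."""
    results = []
    for path in image_paths:
        image = load(path)                 # stand-in for opening the file
        for step in process_steps:
            image = step(image)            # replay the recorded history
        out = os.path.join(output_dir, os.path.basename(path))
        results.append((out, image))       # a real run would save to disk
    return results

# toy "images" and "processes": numbers, and arithmetic steps
def load(path):
    return 1.0

steps = [lambda im: im * 2, lambda im: im + 0.5]   # the recorded history
batch = run_batch(["a.fit", "b.fit"], steps, "out")
```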

Tuesday 16 August 2016

Vibration damping the EQ3 aluminium tripod

The EQ3 mount with the aluminium tripod is generally considered not to be suitable for astrophotography. Still, it's a nice, portable mount that, under the right circumstances, can produce relatively good images.
There is an article on cloudynights.com that describes how the tripod can be beefed up. The author of that article increases the weight of the tripod by putting rebar in the upper legs, and a rectangular wooden dowel in the lower legs.
The problem with the tripod is not just a weight problem, but rather a vibration problem.
Filling up the hollow legs with dowels and rebar doesn't necessarily improve the vibration characteristics of this mount. A person commenting on the cloudynights article suggested that the legs can also be filled with sand, which would result in both a heavier tripod and different vibration characteristics.
I decided to modify my tripod by inserting wooden dowels in the upper and lower legs. But I also secured these dowels to the plastic and aluminium structure. Hopefully, this will improve the vibration damping of the tripod, without it becoming too heavy.
Wooden dowel cut to size, ready to be inserted in the lower leg
Starting with the lower parts of the legs, I removed all the plastic parts and inserted oak dowels into the aluminium tubes. I noticed that the plastic feet of the tripod are hollow and extend a bit up into the legs. By making the dowels somewhat thinner, and drilling a hole, where the hole in the plastic is, I could fasten the wooden dowel to the plastic foot and later the aluminium leg, and even the top lid of the leg.
Wooden dowel will be secured to the plastic foot and the aluminium leg
Top part of the lower leg
I then inserted two round beech dowels (12 mm diameter) into the upper parts of the legs, making sure there was a tight fit at either end. Unfortunately, it's not possible to fasten these dowels other than through the tight fit and the small screws that hold the leg spreader in place.
One half of an upper leg

Dowels inside the upper leg
It doesn't take long to get all three legs done.
All three legs completed. Time for reassembly
Finally, reassembling the tripod, it looks as before.
The tripod now weighs 3.6 kg, not much more than before, but it feels steadier.
For a short while I also considered filling the tripod with sand, but it turned out that the tripod legs are not sealed at the lower ends, so most of the sand would run out after a while. Filling the tripod would also make it much heavier. Hopefully the wooden dowels will improve the damping.
Now all that remains is a clear night to test the tripod.

Saturday 13 August 2016

First experience with INDI on Raspberry Pi - part 2

Last week, when I tried to control my mount through INDI on a Raspberry Pi, I managed to install the server and connect from my laptop to the INDI server on the RPi. However, the mount didn't respond. It turned out that the USB serial cable no longer worked.
Yesterday I received an EQDIR cable from FLO and connected it to the mount. After some adjustment of the parameters in Linux and the INDI client, it all worked perfectly.
Now I can control my mount from PixInsight or any client that speaks the INDI protocol.
The next step will be to install and test servers.
Short recap of the installation so far.
  1. Install an Ubuntu Mate image on a SD card for the RPi
  2. Connect the RPi to the home Wifi network and set parameters to connect to PuTTY
  3. Connect to the indi repository, download and install the indi server
  4. Add $USER to the dialout group
  5. Create a permanent USB entry for the connector
  6. Start the server
  7. Start the client and connect to the server
  8. Configure the site and the mount in the client
So far PixInsight can connect to the mount and send goto commands. With the search capability, I can just search for say, M27 and the mount will slew to it.
Of course, this assumes that the mount is aligned, and so far PixInsight can't do a 2-star alignment.
I just hope that this will be implemented soon.
For the time being, my intended workflow is as follows.
  1. Haul out the mount and set up
  2. Level mount
  3. Start mount with SynScan
  4. Do a polar and a 3-star alignment
  5. Park the mount and power off
  6. Disconnect the SynScan
  7. Connect the RPi and boot
  8. Connect the client
Further testing is delayed by clouds :-(

Wednesday 3 August 2016

Note on Dynamic Background Extraction

Astroimages almost always have a background gradient that needs to be removed. Gradients have two basic causes: either they are due to limitations of the optical system (vignetting), or to uneven illumination of the night sky. Most of us live and photograph in light polluted environments, and our astroimages incorporate stray light from street lamps or city lights. Even when photographing from a dark site, there is the inevitable sky glow. Whatever the cause of an uneven background, it is seldom something we want incorporated in our images.
PixInsight has two processes for gradient removal: Automatic Background Extraction (ABE) and Dynamic Background Extraction (DBE). These two processes work slightly differently from each other, so it is good to know them both. ABE is an automatic process that does most of the work for you, especially the more laborious part of placing samples in the image. DBE, on the other hand, allows for more user control.
In this article, I intend to give my experience of the DBE process, and how I use the various settings in the DBE control window.
In short, what you do with DBE is take samples of the background in your image and create a model of the image background based on those samples. (Note that I assume you are working with an RGB colour image.)

Dynamic Background Extraction
When you open the DBE process (Process | BackgroundModelization | DynamicBackgroundExtraction), you start by connecting it to an image, the target, in your workspace. This is done either by clicking in the image you wish to connect to, or by clicking the reset icon at the bottom right of the control window (the four arrows pointing inwards). The latter option will also reset all settings in the control window. The active image is now linked to the process, and it shows the symmetry lines that can be used by DBE. More on the symmetry lines in a moment.

Target View

Each time you click in the target window, a new sample will be created at that position. In the target view you will see how individual pixel values will be used in the creation of the background model. Each sample has a position (anchor x, y) and a size (radius). The square field in the target view panel shows how each pixel is used in the model. This field should ideally consist of only bright pixels. If a pixel has a colour, then that pixel will only be used in the calculation of the model for that colour. The three values Wr, Wg, Wb are the weights in red, green and blue for the combined pixels in the sample. They determine how much this sample will contribute to the background model. In this view you can also determine if symmetries are to be used. If you have an image which you know has a symmetrical background (vignetting for example), you can create samples in one place where the background is visible, and use those samples in other parts of the image, even if the background there is not visible. When you click on one of the boxes (H for horizontal, V for vertical, D for diametrical), a line will show where the sample will be used. Note that you can control the symmetry for each individual sample. Use with care.
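As an illustration of how such weights might behave, a sample's per-channel weight can be thought of as the fraction of its pixels that survive an outlier rejection. This mimics the idea described above, not PixInsight's actual formula:

```python
import statistics

def sample_weight(channel_pixels, tolerance=2.0):
    """Illustrative stand-in for a DBE per-channel sample weight:
    reject pixels more than tolerance*sigma from the sample median,
    and weight the sample by the surviving fraction."""
    med = statistics.median(channel_pixels)
    sigma = statistics.pstdev(channel_pixels)
    if sigma == 0:
        return 1.0                         # perfectly flat sample
    kept = [p for p in channel_pixels if abs(p - med) <= tolerance * sigma]
    return len(kept) / len(channel_pixels)

# a clean background sample vs. one contaminated by a star
flat = [0.10, 0.11, 0.10, 0.09, 0.10]
starry = [0.10, 0.11, 0.10, 0.90, 0.10]
w_flat = sample_weight(flat)
w_star = sample_weight(starry)
```

The contaminated sample keeps a lower weight, so it contributes less to the background model, just as described above.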

Model Parameters

In this panel you set how strict your model is going to be. The most important value is Tolerance. Increase this if you find that too many samples are rejected. The default is 0.5, but expect to use values up to 2.5 regularly, and in extreme cases even 5 - 7 or higher. Still, try to keep this value as low as possible. Once you have created all your samples and are satisfied with where you placed them, you can decrease this value somewhat and recalculate, until samples start being rejected. Choose the lowest value you can get away with, as this will result in a better approximation of the true background.
Smoothing factor determines how smooth your model is going to be. If you set this to 0.0 then the background will follow your samples very strictly. Increase this value to get a smoother background model if you see artefacts in the model.

Sample Generation

DBE Sample Generation
DBE lets you create your own samples, which is great if you have an image with lots of stars or nebulosity, but it can also create samples for you.
The first parameter sets the size of the samples. The samples will be squares with "sample size" pixels on each side. Use the largest samples that will not cover any stars. Obviously, if you have an image of the Milky Way, you will need to keep this value small, or you won't be able to position samples without covering stars.
Number of samples determines how many samples will be created across the image. It is generally best to use more samples: if you use too few, your background model may not represent your true background. Even a linear background can be modelled with many samples, but a more complicated background can't be modelled with, say, three samples.
Minimum sample weight is only important if you let the process create samples. If you know that you have a strong gradient in the background, you should decrease its value to maybe 0.5 in order to create more samples. This parameter is used together with Tolerance to create samples in areas with a stronger gradient.
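The generation step can be pictured as a uniform grid with weight-based rejection. A toy sketch, where the weight function is a stand-in for evaluating the image at each grid cell:

```python
def generate_samples(width, height, per_row, size, weight_fn, min_weight=0.75):
    """Place a uniform grid of square samples (as DBE's sample
    generation does) and reject those whose weight falls below
    min_weight. weight_fn(x, y) stands in for evaluating the
    sample at that spot."""
    samples = []
    for row in range(per_row):
        for col in range(per_row):
            # centre of this grid cell
            x = (col + 0.5) * width / per_row
            y = (row + 0.5) * height / per_row
            w = weight_fn(x, y)
            if w >= min_weight:
                samples.append((x, y, size, w))
    return samples

# toy weight function: everything is background except a bright corner
samples = generate_samples(100, 100, 4, 15,
                           lambda x, y: 0.2 if (x < 30 and y < 30) else 1.0)
```

Of the 4 x 4 grid, the one cell that lands on the bright corner is rejected; lowering min_weight would keep it, which is why a lower minimum sample weight yields more samples.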

Model Image

This is where you set how your model background will be represented as an image. It is probably the least important panel, so I won't comment further on it.

Target Image Correction

DBE Target Correction
This is probably the most important panel, as it is here you determine which type of gradient you want to remove. There are three options for gradient removal: none, which you would use to test settings without applying the process to your image; subtraction, which is used to remove gradients from light pollution or sky glow; and division, which is used to remove gradients caused by the optical system.
Examine your image and determine the most likely cause of the gradients. If you find that you have gradients due to both vignetting and light pollution, you may have to apply the DBE process twice, but in many cases once is enough. If you need to apply DBE twice, it seems most logical to get rid of vignetting first, since it has affected all light entering your imaging setup. You would then first apply division as your correction method, and secondly apply subtraction with a new DBE process.
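The two correction modes can be written out explicitly. A minimal sketch, using a simple mean-based normalisation to keep the overall level unchanged (the exact normalisation PixInsight uses may differ):

```python
def correct_background(image, model, mode="subtraction"):
    """Apply a DBE-style correction pixel by pixel: subtraction for
    additive gradients (light pollution), division for multiplicative
    ones (vignetting)."""
    flat_model = [p for row in model for p in row]
    mean_model = sum(flat_model) / len(flat_model)
    out = []
    for irow, mrow in zip(image, model):
        if mode == "subtraction":
            out.append([i - m + mean_model for i, m in zip(irow, mrow)])
        else:  # division
            out.append([i * mean_model / m for i, m in zip(irow, mrow)])
    return out

# an image that is pure gradient plus a faint source at one pixel
model = [[0.1, 0.2], [0.3, 0.4]]
image = [[0.1, 0.2], [0.3, 0.9]]      # 0.5 of real signal at (1, 1)
flat = correct_background(image, model, "subtraction")
```

Subtracting the model flattens the gradient to a constant level while the excess signal at (1, 1) survives on top of it.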
You can choose to view your background model or to discard it. I always keep the model, since I want to examine it; this is handy in case you want to refine your samples and settings. If the model looks complicated and blotchy, with several colours, then you are probably overcorrecting, which may result in the loss of colour in nebulae. Make it a habit to check the background model before you discard it.
You can also choose to replace your image with the corrected version, or to create a new image. A new image will not have any history; if you replace your original image instead, it keeps its entire history, which can be handy.

How stars are handled in DBE

(This is the way I understand it works, which may be wrong)
If you place a sample over a star, you will notice that the sample shows a hole (= black) at the star's position, probably with a coloured band around this hole. This means that the pixels that represent the star have a weight of 0 and will not be considered in the background model. However, the coloured band can be a halo or chromatic aberration, and those pixels will be taken into account for the background model. To avoid this, it is always better not to place samples over stars. If you can't avoid it, then at least examine the sample carefully, and try to place it such that its effect is minimized. Also note that since the star's pixels are not taken into account, the sample consists of fewer pixels, and each pixel will contribute more to the background model.

On the size and number of samples

The samples you create should represent true background. If your image has large patches of background, you can have larger samples. If on the other hand, your image has lots of nebulosity or lots of small stars, then the background will only truly be covered by small samples. Examine your image and set sample size accordingly.
Should you use few or many samples?
It seems that some people like to use few samples in an image, while others use smaller but many samples.
There is a danger that if you use many samples, some will cover nebulosity. When the correction is applied, this will remove real signal from the target.
On the other hand, if you only place a few samples, these may not pick up the variation of the background properly.
As usual, the number of samples that you should use must depend on the image.
Theoretically, if you have a linear gradient in an image, creating just two samples would be enough to model the background. But any mistake in either of the samples will have a severe effect on the accuracy of the background model. If you use a larger amount of samples, then each individual sample will have less effect on the background model. This generally results in a better model than using just a few samples.
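The two-samples-versus-many argument is easy to see with a least-squares fit: with many samples the plane is overdetermined, so any one bad sample barely moves it. A pure-Python sketch fitting z = a + b*x + c*y to background samples:

```python
def fit_plane(samples):
    """Least-squares fit of z = a + b*x + c*y to background samples
    (x, y, z), via the 3x3 normal equations solved by Gaussian
    elimination with partial pivoting."""
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, z in samples:
        row = (1.0, x, y)
        for i in range(3):
            v[i] += row[i] * z
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # forward elimination
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    # back substitution
    sol = [0.0] * 3
    for i in (2, 1, 0):
        sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
    return sol  # a, b, c

# many samples of a clean linear gradient recover it exactly
pts = [(x, y, 0.1 + 0.02 * x + 0.01 * y) for x in range(5) for y in range(5)]
a, b, c = fit_plane(pts)
```

With 25 samples the recovered coefficients match the true gradient; perturbing a single sample would shift the fit only slightly, whereas with just two or three samples the same error would dominate the model.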
I have had success with using a large number of samples (20 - 25 per row, or some 400+ samples) in my images. It does however, take quite a while to place all these samples. Even if I automatically generate the samples, I still have to make sure that they don't cover stars or part of my target.
One method that I have found helpful is to create a clone of the image and stretch it. This lets me see where samples can be placed and where they should be avoided. I then place the samples on this clone, but do not apply the correction.
After placing the samples, I create a process instance on the workspace and delete the open instance. I then apply the process on the unstretched original image.

What to look for after background extraction

As I already mentioned, I always keep the extracted background image. I examine this, and if I find that the background contains traces from nebulosity, I generally undo the extraction and change the samples in my image.
I also examine the corrected image for artefacts. If samples are too close to a target or a star, there is a chance that DBE creates a dark region around this target or star. Even in this case I undo the operation and move or remove samples.
I repeat this process until there are no dark patches left where they shouldn't be, and the background looks smooth while nebulosity has been preserved.
It can take quite a while to get the extraction right, but it will make further processing easier if you spend more time on this step.

First experiences with INDI on Raspberry Pi

Now that I have invested in a proper mount, I'm also looking into remote (15 meters) operation of it.
I don't want to drag my laptop out into the garden just to have it covered with dew, and I like the size of Raspberry Pi. This, and the fact that PixInsight is moving into the direction of hardware control through the INDI protocol, made me decide to look into the INDI solution, rather than EQMOD.
So, last weekend I erased my Pi memory card and installed Ubuntu Mate. This OS was recommended on the INDI website (indilib.org ).
Now, I have very little experience with linux, and for most of the things I do, I need to follow a tutorial or google my way around. The following is probably not the best way to do it, but these are my experiences.

Installing the OS wasn't much of a problem; download and extract the image. Then use Win32DiskImager to write the OS image onto the memory card.
Started the OS, and managed to connect it to PuTTY, but in the beginning I mainly used the desktop and a terminal window in the desktop.
Installing the INDI library took some time. For some reason I couldn't register or connect to the INDI repository (mutlaqja ppa), and the desktop on several occasions reported an internal error. Finally (don't ask me how) I managed to connect to the repository and install INDI. Getting this far took quite a while, so I backed up the memory card image to Windows. I figured that if I ever need to go back and reinstall the OS, at least I won't have to start from scratch.
I managed to get the INDI server up and running, and decided to rename the USB port for permanent reference. Some googling gave the answer, and some more tapping away on my keyboard (by now I wasn't using the Mate desktop anymore, but was connected through PuTTY over WiFi).
I then connected the mount through SynScan's serial cable and a serial/USB interface.
I managed to connect from PixInsight's INDI client, but the program crashed a few times. Again, don't ask me why. I had never been able to crash PixInsight before, but during the past few days I managed it twice. (Mind you, I have managed to bring it to its knees by integrating some 200+ 14-megapixel drizzled images. But that's a different story.)
It seems that there isn't a "hello world" application that lets you test a partial setup. There isn't even a proper tutorial that covers a complete setup. It takes some googling and looking around the INDI website to get ideas and suggestions for solutions.
Anyway, I also tried connecting through Stellarium, which didn't protest and connected to the server.
Both the PI and Stellarium connections worked fine, as the server kept responding to slew requests. However, the mount didn't budge an arcsecond.
After a long time installing, uninstalling and reinstalling various things and starting and stopping the server, rebooting the RPi, etc, etc, I finally called it a night, not having moved the mount remotely at all.
I dismantled the RPi, cables, and the mount (I'm doing this more or less in the family living room), and just as I was about to disconnect the serial cable, I noticed that neither of its LEDs was lit or blinking.
It appears that my serial/USB connector isn't working anymore. So now I'm waiting for the HITECH EQDIR SynScan/USB interface to arrive from First Light Optics.
Since everything else worked fine, just plugging in the connector should make the remote setup work. Something tells me though, that it will not work from the start, even with a new cable.

The setup so far:
RPi 2 with Ubuntu Mate, connected to PuTTY on Windows.
sudo apt-add-repository ppa:mutlaqja/ppa (works after a few tries and reboots)
sudo apt-get install indi-full
sudo adduser $USER dialout (so I don't have to be root user to use indi)
create a rules file to rename the mounts usb port, using udevadm
indiserver -m 100 -v indi_eqmod_telescope
several reboots along the way.

To do next:
Make sure that the new connector works (without the Synscan)
Make sure that the setup works (mount connected to RPi without the SynScan in between; indiserver controlled by Stellarium on a Windows machine)
Make sure that indiserver starts up automatically after booting the RPi.
Find and install a client that lets me control the mount and will replace the Synscan.

To be continued, I guess.

Wednesday 27 July 2016

PixInsight process icons

When you process an image, it's always handy to copy the processes you use to the workspace. You can do this by dragging the little triangle in the lower left corner onto the workspace. This will preserve the settings for that instance.
The only problem with this is that all process icons get a name ProcessXX, where XX is a number.
It is easy to change this name to something more sensible:
Click on the small N on the right hand side of the icon. This will open a dialog box where you can change the icon name. If you want to add a description, you click on the small D. Use this for example to write which mask you used for the process.
process and image icons
Image icons have no description, but you can change their name. Also note that process icons are full rectangles, while image icons have one corner missing.

Thursday 21 July 2016

The effect of dithering

Some time ago I wrote about how dithering can improve the quality of raw images.
If you control your camera and mount from a computer, you can use software to apply small mount movements between exposures. Some programs use random movements of the RA and DEC axes to avoid patterns in your stacked images.
Unfortunately, almost all camera control software is written for either Canon or Nikon cameras. Since I have an old Pentax camera with a quirky USB connector, I can't control it from my computer.
I've written about my ditherbox earlier. Here's an example of how it works.
This short video shows the effect of dithering. M45 was the target, and some 46 images were taken and registered. Before registering, the target is placed on different parts of the sensor according to the dithering pattern. After registering, the target is stationary, and the noise pattern moves against the dithering pattern. This is clearly seen in the video.
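For reference, the kind of random offset sequence such dithering produces can be sketched like this (the box size and step distribution are arbitrary choices for illustration):

```python
import random

def dither_offsets(n, max_px=12, seed=1):
    """Generate n random RA/DEC dither offsets (in pixels) confined
    to a small box, so the target wanders over the sensor without
    drifting away."""
    rng = random.Random(seed)
    x = y = 0
    offsets = []
    for _ in range(n):
        # random walk step, clamped so it never leaves the box
        x = max(-max_px, min(max_px, x + rng.randint(-4, 4)))
        y = max(-max_px, min(max_px, y + rng.randint(-4, 4)))
        offsets.append((x, y))
    return offsets

# one offset per exposure, as for the 46 subframes of M45
moves = dither_offsets(46)
```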


Wednesday 13 July 2016

Noise reduction for DSLR astro images

Astro images taken with a DSLR at a high ISO setting are noisy, and the best way to decrease the noise level is of course to take lots of images and stack these. But even then, some sort of noise reduction is necessary.
Noise in DSLR images manifests itself as intensity noise and colour noise. Think of it this way: noise is a random variation in pixel values. A pixel value can vary in intensity (more or less intensity of the same colour) or in colour (same intensity but a different colour). Both these variations have to be addressed by a noise reduction process.
Here I will show you my procedure for DSLR images.
First I apply noise reduction to the luminance or lightness (colour intensity) of the image, and then a very aggressive noise reduction to the chrominance (colour variation).
One of the most efficient luminance noise reduction methods in PixInsight is TGVDenoise. This method is especially good at reducing high-frequency (or small-scale) intensity noise, and is based on a diffusion algorithm. This means that it detects variations in pixel values and pushes these variations outwards, away from the pixel. As in any diffusion process, the longer you let it run, the stronger the spreading will be; in the case of TGVDenoise, this means letting the process go through many iterations.
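A bare-bones diffusion on a one-dimensional signal illustrates the principle (TGVDenoise itself is far more sophisticated and edge-preserving, but the role of the iteration count is the same: more iterations, stronger smoothing):

```python
def diffuse(signal, iterations=50, rate=0.2):
    """One-dimensional diffusion denoising: each iteration nudges every
    sample toward the average of its neighbours, spreading local
    variation outwards. Endpoints are held fixed."""
    s = list(signal)
    for _ in range(iterations):
        nxt = s[:]
        for i in range(1, len(s) - 1):
            nxt[i] = s[i] + rate * (s[i - 1] + s[i + 1] - 2 * s[i])
        s = nxt
    return s

noisy = [0.1, 0.1, 0.1, 0.6, 0.1, 0.1, 0.1]   # one noisy spike
smooth = diffuse(noisy, iterations=50)
```

After 50 iterations the spike has been diffused into the surrounding background while the flat regions are essentially unchanged.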

One of the best ways to use TGVDenoise was devised by Philippe Bernard. A presentation in French can be found on his website: TGVDenoise. A slightly updated version was presented on the PixInsight forum by member Tromat.
(Always have an STF stretch applied to the image. This keeps the image in its linear state, but lets you see on screen what the image looks like. Also, always test the settings on a small preview that contains both background and a weak signal you want to preserve.)

Before and after luminance noise reduction, using TGVdenoise

The second stage of noise reduction is to reduce colour noise, or chrominance noise. For this I use the MultiscaleMedianTransform.
In the noise reduced image, you will probably still see colour variation in the background. MMT will take care of this noise.
First you will need to create a mask that will protect the stars and target.
For this, make sure no mask is applied to your image. Extract a luminance layer (CIE L*) from the image, using the ChannelExtraction process. Apply a histogram stretch to this channel: make the background as dark as possible, and the stars and target as bright as possible. Don't worry if pixels at either end are clipped. Then open the MultiscaleLinearTransform process and set the number of wavelet layers to one. Double-click on the first layer to turn it off, and apply the process to the luminance layer. This will blur the image. If you want more blurring, undo the process, set the number of wavelet layers to two, turn both layers off, and apply again.

Apply the luminance mask to the image and invert it. The target and stars are now masked, while the background is revealed for noise reduction.
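The mask-building steps can be sketched on a one-dimensional luminance profile. The hard threshold stands in for the histogram stretch and the box blur for the zeroed wavelet layers; both values are illustrative:

```python
def make_background_mask(lum, threshold=0.3, blur_passes=2):
    """Sketch of the mask-building steps: stretch (clip) so stars go
    to 1 and background to 0, box-blur to soften the edges, then
    invert so the background is exposed and stars are protected."""
    # stretch: everything below threshold -> 0, above -> 1
    m = [1.0 if v >= threshold else 0.0 for v in lum]
    # small box blur, repeated, stands in for zeroing wavelet layers
    for _ in range(blur_passes):
        m = [(m[max(i - 1, 0)] + m[i] + m[min(i + 1, len(m) - 1)]) / 3
             for i in range(len(m))]
    # invert: mask value 1 = background (receives noise reduction)
    return [1.0 - v for v in m]

lum = [0.05, 0.06, 0.05, 0.9, 0.05, 0.06]     # a star at index 3
mask = make_background_mask(lum)
```

The inverted mask is darkest (most protective) at the star and fully open over the background, which is exactly what the chrominance noise reduction needs.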
Open the MultiscaleMedianTransform process and choose 7 wavelet layers. Set the mode to Chrominance (Restore CIE Y).
Enable noise reduction on only the first layer. Set strength to 5 and leave the other parameters as they are. Apply to a small preview.
You should see a lot of the small-scale noise disappear, but there is still a lot of coarser noise left.
Increase the strength parameter to 7 and apply to the preview. Better? If you still want more noise reduction, increase strength to 10 and apply.
If you are satisfied, do the same for wavelet layer number 2.
Generally, you will need most noise reduction on the first layer (fine, single pixel scale detail), and less on higher number layers. Just test one layer at a time, until you are satisfied.
Then apply to the entire image.
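The layered structure of the transform can be illustrated on a one-dimensional signal. This toy version builds layers from median filters of growing radius; how PixInsight maps the "strength" slider to an attenuation factor is an assumption here:

```python
import statistics

def median_filter(s, radius):
    n = len(s)
    return [statistics.median(s[max(i - radius, 0):min(i + radius, n - 1) + 1])
            for i in range(n)]

def mmt_denoise(signal, strengths):
    """Toy multiscale median transform: layer k is the difference
    between median filters of radius k-1 and k; the residual holds
    the large-scale structure. Attenuating a layer mimics per-layer
    noise reduction (the strength -> attenuation mapping is made up)."""
    layers = []
    prev = list(signal)
    for k in range(1, len(strengths) + 1):
        cur = median_filter(signal, k)
        layers.append([a - b for a, b in zip(prev, cur)])
        prev = cur
    out = prev                            # residual (largest scales)
    for layer, s in zip(layers, strengths):
        att = 1.0 / (1.0 + s)             # stronger setting -> more attenuation
        out = [o + att * l for o, l in zip(out, layer)]
    return out

sig = [0.1, 0.1, 0.5, 0.1, 0.1, 0.1]      # single-pixel (layer 1) noise
clean = mmt_denoise(sig, strengths=[5, 0])   # attack only the first layer
```

The single-pixel spike lives entirely in layer 1, so attenuating that layer suppresses it while layer 2 and the residual pass through untouched, which mirrors why most of the strength goes on the first layer.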

MMT takes a while to run the first time, but you will notice that it is much faster afterwards. This is because the process needs to calculate the wavelet layers, and as long as you do not change the number of wavelet layers, it only does this once. The calculation is independent of preview size, so once you've found the best settings, the process is also fast on the whole image.
Don't forget to remove the mask once you're done.
Here's a before and after image.

Before and after chrominance noise reduction, using MMT

Tip: don't delete masks, because that will break the links in the process history. Just minimise them and move them to one side.
Note that in the example image, the streaks are created by residual hot pixels during the stacking process. Dithering will eliminate this effect.

BTW, here's the final image.

Thursday 7 July 2016

Dithering in hardware

A common source of noise in astro images taken with a DSLR is hot pixels that were not removed in the calibration process.
When light frames are registered and integrated, and the tracking wasn't spot on, these hot pixels end up as streaks or "rain" in the final image.

Extreme crop of an unprocessed (but stretched) integrated image
Normally, a master dark frame is supposed to suppress hot pixels in the light frames before registration and integration. For a non-cooled DSLR, however, it is very difficult to match the master dark to the light frames, so faulty pixels remain after the calibration process.
There are various processing tools that can be used on the raw image frames to remove hot pixels. These tools rely on filters that remove intensity values from either the individual frames, or from the stack of images that are to be integrated. By careful use of these tools, most of the noise can be reduced. The noise that remains in the final master image, can be further reduced during post processing.

Crop of the same area after processing
While the hot pixels cannot be removed during data collection, the pattern they form after integration can be altered. The streaks in the first example were caused by tracking issues. If tracking had been spot on (e.g. through guiding), the hot pixels would not have formed a pattern, but would have been visible as bright points in the final integrated image.
If the camera is moved a few pixels in a random way, no streaks will form and the noise will not be visible. This technique is called dithering and was suggested by astrophotographer Tony Hallas.
Cameras are usually controlled by software running on a laptop, and many of these have the option to apply dithering.
However, most software is written for either Canon or Nikon cameras, and may not work with other brands. I use a Pentax K20D for all my astro work, and this camera has issues when trying to connect to a computer, so software controlled dithering is not possible for me.
The only way for me to use dithering was to sit next to the camera and manually move the camera in RA or DEC between exposures. Doing this during winter, trying to get 50+ exposures, was not my idea of a fun time.
The solution to this problem was to build my own hardware device for camera control; a ditherbox.

The device intercepts the trigger signal from the intervalometer and sends it on to the camera. In between exposures, it also tells the mount to move in either RA or DEC. It does nothing else. I have to figure out at what speed to move the mount (and program this into the handcontroller), and for how long. The box only sends a "move" signal for the whole time between exposures. This time is set in the intervalometer and is determined by the number of pixels to be moved, the pixel size, and the focal length of the lens or scope.
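The timing calculation can be written out. It combines the usual image-scale formula (arcsec per pixel = 206.265 × pixel size in µm / focal length in mm) with the sidereal rate of 15.04 arcsec/s; the numbers in the example call are arbitrary, not my actual setup:

```python
def move_seconds(pixels, pixel_um, focal_mm, rate_x_sidereal):
    """How long the 'move' signal must be asserted: convert the
    desired shift in pixels to arcseconds via the image scale, then
    divide by the mount's slew speed (a multiple of sidereal)."""
    arcsec_per_px = 206.265 * pixel_um / focal_mm
    slew = rate_x_sidereal * 15.04        # arcsec per second
    return pixels * arcsec_per_px / slew

# e.g. shift 10 pixels with a 6 um sensor at 300 mm, mount at 2x sidereal
t = move_seconds(10, 6.0, 300.0, 2.0)
```

The resulting time (about 1.4 s in this example) is what goes into the intervalometer's gap between exposures, as described above.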
The heart of the ditherbox is a small microcontroller from Atmel, the ATtiny 84. All inputs and outputs are isolated through optocouplers, and it receives its power from the SynScan handcontroller. So there are no extra batteries or power cables involved.
More information on the device can be found on Stargazers Lounge. The software can be found on my Github site.
And here is an example of the benefits of dithering.
(NB: this image, while showing the same area of the sky, was taken with another focal length and under different conditions.)

Same area in the sky, imaged using dithering;
unprocessed (stretched) integrated image