The ASP tool camera_solve offers several ways to find the true position of frame camera images that do not come with any attached pose metadata. This can be useful with aerial, hand-held, and historical images for which such information may be incomplete or inaccurate.

An overview of the tool and examples are provided in this chapter. Reference information for this tool can be found in Section 12.9.

This tool can be optionally bypassed if, for example, the longitude and latitude of the corners of all images are known (Section 9.4).

9.1. Camera Solve Overview

The camera_solve tool is implemented as a Python wrapper around two other tools. The first of these is the THEIA software library, which is used to generate initial camera position estimates in a local coordinate space. You can learn more about THEIA at http://www.theia-sfm.org/index.html. The second tool is ASP's own bundle_adjust tool. This second step improves the solution to account for lens distortion and transforms the solution from local to global coordinates by making use of additional input data.

The tool only solves for the extrinsic camera parameters; the user must provide intrinsic camera information. You can use the camera_calibrate tool (see Section 12.7) or other camera calibration software to solve for intrinsic parameters if you have access to the camera in question. The camera calibration information must be contained in a .tsai pinhole camera model file and must be passed in using the --calib-file option. You can find descriptions of our supported pinhole camera models in Section 15.1.

If no intrinsic camera information is known, it can be guessed by doing some experimentation. This is discussed in Section 9.5.

In order to transform the camera models from local to world coordinates, one of three pieces of information may be used. These sources are listed below and described in more detail in the examples that follow:

  • A set of ground control points of the same type used by pc_align. The easiest way to generate these points is to use the ground control point writer tool available in the stereo_gui tool.

  • A set of estimated camera positions (perhaps from a GPS unit) stored in a csv file.

  • A DEM to which a local point cloud can be registered using pc_align. This method can be more accurate if estimated camera positions are also used. The user must perform the alignment to a DEM; that step is not handled by camera_solve.


Power users can tweak the individual steps that camera_solve goes through to optimize their results. This primarily involves setting up a custom flag file for THEIA and/or passing in settings to bundle_adjust.

9.2. Example: Apollo 15 Metric Camera

To demonstrate the ability of the Ames Stereo Pipeline to process a generic frame camera we use images from the Apollo 15 Metric camera. The calibration information for this camera is available online and we have accurate digital terrain models we can use to verify our results.

First download a pair of images:

Fig. 9.1 The two Apollo 15 images (AS15-M-0414 and AS15-M-1134).

In order to make the example run faster we use downsampled versions of the original images. The images at those links have already been downsampled by a factor of \(4\sqrt{2}\) from the original images. This means that the effective pixel size has increased from five microns (0.005 millimeters) to 0.028284 millimeters.
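As a quick sanity check on the arithmetic above, the downsampled pixel pitch follows directly from the original five-micron pitch:

```python
import math

# Original Apollo 15 Metric camera pixel pitch, in millimeters.
original_pitch_mm = 0.005

# The images were downsampled by a factor of 4*sqrt(2), which
# scales the effective pixel pitch by the same factor.
downsample_factor = 4 * math.sqrt(2)

effective_pitch_mm = original_pitch_mm * downsample_factor
print(round(effective_pitch_mm, 6))  # → 0.028284
```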

The next step is to fill out the rest of the pinhole camera model information we need. Using the data sheets available at http://apollo.sese.asu.edu/SUPPORT_DATA/AS15_SIMBAY_SUMMARY.pdf we can find the lens distortion parameters for the Metric camera. Looking at the ASP lens distortion models in Section 15.1, we see that the description matches ASP's Brown-Conrady model. Using the example in the appendix we can fill out the rest of the sensor model file (metric_model.tsai) so it looks as follows:
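The original file listing was lost in this copy; the sketch below only illustrates the layout of a Brown-Conrady .tsai file, assuming the commonly cited 76.080 mm calibrated focal length and an optical center of 2024 pixels converted to millimeters. The distortion values shown are zero placeholders that must be replaced by the coefficients from the data sheets. Focal length and optical center are in millimeters.

```
VERSION_3
fu = 76.080
fv = 76.080
cu = 57.246816
cv = 57.246816
u_direction = 1 0 0
v_direction = 0 1 0
w_direction = 0 0 1
C = 0 0 0
R = 1 0 0 0 1 0 0 0 1
pitch = 0.028284
BrownConrady
xp = 0.0
yp = 0.0
k1 = 0.0
k2 = 0.0
k3 = 0.0
p1 = 0.0
p2 = 0.0
phi = 0.0
```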

These parameters use units of millimeters, so we have to convert the nominal center point of the images from 2024 pixels to units of millimeters. Note that for some older images like these, the nominal image center can be checked by looking for some sort of marking around the image borders that indicates where the center should lie. For these pictures there are black triangles at the center positions and they line up nicely with the center of the image. Before we try to solve for the camera positions we can run a simple tool to check the quality of our camera model file:
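The check can be done with ASP's undistort_image tool; the input file name here is illustrative:

```shell
undistort_image AS15-M-0414.tif metric_model.tsai -o undistorted.tif
```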

It is difficult to tell if the distortion model is correct by using this tool, but it should be obvious if there are any gross errors in your camera model file, such as incorrect units or missing parameters. In this case the tool will fail to run or will produce a significantly distorted image. For certain distortion models the undistort_image tool may take a long time to run.

If your input images are not all from the same camera or were scanned such that the center point is not at the same pixel, you can run camera_solve with one camera model file per input image. To do so, pass a space-separated list of files surrounded by quotes to the --calib-file option, such as --calib-file 'c1.tsai c2.tsai c3.tsai'.

If we do not see any obvious problems we can go ahead and run the camera_solve tool:
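A minimal invocation might look like the following sketch; the output folder name is arbitrary:

```shell
camera_solve out/ AS15-M-0414.tif AS15-M-1134.tif \
    --datum D_MOON --calib-file metric_model.tsai
```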

We should get some camera models in the output folder and see a printout of the final bundle adjustment error among the program output information.

We can't generate a DEM with these local camera models, but we can run stereo anyway and look at the intersection error in the fourth band of the PC.tif file. While there are many speckles in this example where stereo correlation failed, the mean intersection error is low and we don't see any evidence of lens distortion error.
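Assuming camera_solve wrote its solved cameras into out/ with names derived from the images, such a run might be sketched as follows (camera file names are illustrative):

```shell
stereo AS15-M-0414.tif AS15-M-1134.tif \
    out/AS15-M-0414.tif.final.tsai out/AS15-M-1134.tif.final.tsai \
    stereo_out/run --session-type pinhole

# Extract the fourth band of the point cloud (the ray
# intersection error) for inspection.
gdal_translate -b 4 stereo_out/run-PC.tif intersection_error.tif
```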

The tool point2mesh (Section 12.41) can be used to obtain a visualizable mesh from the point cloud.

In order to generate a useful DEM, we need to move our cameras from local coordinates to global coordinates. The easiest way to do this is to obtain known ground control points (GCPs) which can be identified in the frame images. This will allow an accurate positioning of the cameras provided that the GCPs and the camera model parameters are accurate. To create GCPs see the instructions for the stereo_gui tool in Section 12.3.1. For the Moon there are several ways to get DEMs, and in this case we generated GCPs using stereo_gui and a DEM generated from LRONAC images.

After running this command:
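The command in question was lost in this copy; a camera_solve run that incorporates a GCP file might look like this sketch (the GCP file name is illustrative):

```shell
camera_solve out_gcp/ AS15-M-0414.tif AS15-M-1134.tif \
    --datum D_MOON --calib-file metric_model.tsai \
    --gcp-file ground_control_points.gcp
```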

we end up with results that can be compared with a DEM created from LRONAC images. The stereo results on the Apollo 15 images leave something to be desired, but the DEM they produced has been moved to the correct location. You can easily visualize the output camera positions using the orbitviz tool with the --load-camera-solve option as shown below. Green lines between camera positions mean that a sufficient number of matching interest points were found between those two images.
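A sketch of such a visualization command, assuming the camera_solve output lives in out_gcp/:

```shell
orbitviz out_gcp/ --load-camera-solve --hide-labels \
    -r moon -o solved_cameras.kml
```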

GCPs can be used in one of two ways. The preferred option is for each of at least three GCPs to show up in more than one image. Then their triangulated positions can be determined in local coordinates and in global (world) coordinates, and bundle_adjust will be able to compute the transform between these coordinate systems and convert the cameras to world coordinates.

If this is not possible, then at least two of the images should have at least three GCPs each, and they need not be shared among the images. For example, for each image the longitude, latitude, and height of each of its four corners can be known. Then, one can pass such a GCP file to camera_solve together with the flag:
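The flag itself was lost in this copy; --transform-cameras-using-gcp is a bundle_adjust option, so with camera_solve it would be forwarded as a bundle adjustment parameter, for example (file names illustrative):

```shell
camera_solve out_gcp/ img1.tif img2.tif \
    --calib-file cam.tsai --gcp-file corners.gcp \
    --bundle-adjust-params '--transform-cameras-using-gcp'
```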

and it will attempt to transform the cameras to world coordinates.

Next, one can run stereo.

Fig. 9.2 Left: Solved-for camera positions plotted using orbitviz. Right: A narrow LRONAC DEM overlaid on the resulting DEM, both colormapped to the same elevation range.

ASP also supports the method of initializing the camera_solve tool with estimated camera positions. This method will not move the cameras to exactly the right location, but it should get them fairly close and at the correct scale, hopefully close enough to be used as-is or to be refined using pc_align or some other method. To use this method, pass additional bundle adjust parameters to camera_solve similar to the following line:
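The original command line was lost in this copy; a sketch might look like the following, where the csv column indices are illustrative and must match the actual nav file:

```shell
camera_solve out_nav/ img1.tif img2.tif \
    --calib-file cam.tsai \
    --bundle-adjust-params '--camera-positions nav_data.csv
        --csv-format "1:file 12:lat 13:lon 14:height_above_datum"'
```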

The nav data file you use must have a column (the 'file' column) containing a string that can be matched to the input image files passed to camera_solve. The tool looks for strings that are fully contained inside one of the image file names, so for example the field value 2009_10_20_0778 would be matched with the input file 2009_10_20_0778.JPG.

Section 5 will discuss the stereo program in more detail and the other tools in ASP.

9.3. Example: IceBridge DMS Camera

The DMS (Digital Mapping System) camera is a frame camera flown as part of the NASA IceBridge program to collect images of polar terrain (http://nsidc.org/icebridge/portal/) that we can use to produce digital terrain.

To process this data the steps are very similar to the steps described above for the Apollo Metric camera, but there are some aspects which are particular to IceBridge. You can download DMS images from ftp://n5eil01u.ecs.nsidc.org/SAN2/ICEBRIDGE_FTP/IODMS0_DMSraw_v01/. A list of the available data types can be found at https://nsidc.org/data/icebridge/instr_data_summary.html. This example uses data from the November 5, 2009 flight over Antarctica. The following camera model (icebridge_model.tsai) was used (see Section 15.1 on Pinhole camera models):
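The original file listing was lost in this copy; the sketch below only shows the layout of such a .tsai file. Every numeric value is a placeholder, and the real model also includes a lens distortion block in place of NULL; the actual focal length, optical center, pixel pitch, and distortion coefficients come from the IceBridge camera calibration.

```
VERSION_3
fu = 28.0
fv = 28.0
cu = 17.9
cv = 11.9
u_direction = 1 0 0
v_direction = 0 1 0
w_direction = 0 0 1
C = 0 0 0
R = 1 0 0 0 1 0 0 0 1
pitch = 0.0064
NULL
```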

Note that these images are in RGB format, which is not supported by all ASP tools. To use the files with ASP, first convert them to single channel images using a tool such as ImageMagick's convert, gdal_translate, or gdal_edit.py. Different conversion methods may produce slightly different results depending on the contents of your input images. Some conversion command examples are shown below:
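The original examples were lost in this copy; the following sketches three possible conversions (file names are illustrative):

```shell
# 1. Luminance-weighted grayscale conversion with ImageMagick.
convert 2009_11_05_00667.JPG -colorspace Gray 2009_11_05_00667_gray.JPG

# 2. Plain average of the three channels with ImageMagick.
convert 2009_11_05_00667.JPG -separate -evaluate-sequence Mean \
    2009_11_05_00667_mean.JPG

# 3. Pick a single band with GDAL.
gdal_translate -b 1 2009_11_05_00667.JPG 2009_11_05_00667_b1.tif
```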

In the third command we used gdal_translate to pick a single band rather than combining the three.

Obtaining ground control points for icy locations on Earth can be particularly difficult because they are not well surveyed or because the terrain shifts over time. This may force you to use estimated camera positions to convert the local camera models into global coordinates. To make this easier for IceBridge data sets, ASP provides the icebridge_kmz_to_csv tool (see Section 12.22) which extracts a list of estimated camera positions from the kmz files available for each IceBridge flight at http://asapdata.arc.nasa.gov/dms/missions.html.
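Its usage is a simple input/output pair; the kmz file name below is illustrative:

```shell
icebridge_kmz_to_csv 2009_11_05.kmz camera_positions.csv
```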

Another option which is useful when processing IceBridge data is the --position-filter-dist option for bundle_adjust. IceBridge data sets contain a large number of images, and when processing many at once you can significantly decrease your processing time by using this option to limit interest-point matching to image pairs which are actually close enough to overlap. A good way to determine what distance to use is to load the camera position kmz file from their website into Google Earth and use the ruler tool to measure the distance between a pair of frames that are as far apart as you want to match. Commands using these options may look like this:
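The original command was lost in this copy; a sketch, with illustrative file names and csv column indices, might be:

```shell
camera_solve out_icebridge/ *.tif \
    --datum WGS84 --calib-file icebridge_model.tsai \
    --bundle-adjust-params '--camera-positions camera_positions.csv
        --csv-format "1:file 2:lon 3:lat 4:height_above_datum"
        --position-filter-dist 2000'
```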

Alternatively, the camera_solve executable can be bypassed altogether. If a given image already has an orthoimage associated with it (check the IceBridge portal page), that provides enough information to guess an initial position of the camera, using the ortho2pinhole tool. Later, the obtained cameras can be bundle-adjusted. Here is how this tool can be used, on grayscale images:
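The original example was lost in this copy; a sketch, with illustrative file names, takes the raw image, its orthoimage, and a camera file holding the intrinsics, and writes a positioned pinhole camera:

```shell
ortho2pinhole raw_image.tif ortho_image.tif \
    icebridge_model.tsai output_pinhole.tsai
```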

Fig. 9.3 Left: Measuring the distance between estimated frame locations using Google Earth and an IceBridge kmz file. The kmz file is from the IceBridge website with no modifications. Using a position filter distance of 2000 meters will mostly limit image IP matching in this case to each image's immediate 'neighbors'. Right: Display of camera_solve results for ten IceBridge images using orbitviz.

Some IceBridge flights contain data from the Land, Vegetation, and Ice Sensor (LVIS) lidar, which can be used to register DEMs created using DMS images. LVIS data can be downloaded at ftp://n5eil01u.ecs.nsidc.org/SAN2/ICEBRIDGE/ILVIS2.001/. The lidar data comes in plain text files that pc_align and point2dem can parse using the following option:
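The option in question is --csv-format; the column indices below are illustrative and must be checked against the header of the LVIS file at hand:

```shell
pc_align --max-displacement 1000 \
    --csv-format '5:lat 4:lon 6:height_above_datum' \
    lvis_points.txt stereo_out/run-PC.tif -o aligned/run
```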

ASP provides the lvis2kml tool to help visualize the coverage and terrain contained in LVIS files; see Section 12.29 for details. The LVIS lidar coverage is sparse compared to the image coverage, and you will have difficulty getting a good registration unless the region has terrain features such as hills, or you are registering very large point clouds that overlap with the lidar coverage across a wide area. Otherwise pc_align will simply slide the flat terrain to an incorrect location to produce a low-error fit with the narrow lidar tracks. This test case was specifically chosen to provide strong terrain features to make alignment more accurate, but pc_align still failed to produce a good fit until the lidar point cloud was converted into a smoothed DEM.

Fig. 9.4 LVIS lidar DEM overlaid on the ASP created DEM, both colormapped to the same elevation range. The ASP DEM could be improved, but the registration is accurate. Notice how narrow the LVIS lidar coverage is compared to the field of view of the camera. You may want to experiment using the SGM algorithm to improve the coverage.

Other IceBridge flights contain data from the Airborne Topographic Mapper (ATM) lidar sensor. Data from this sensor comes packed in one of several formats (variants of .qi or .h5), so ASP provides the extract_icebridge_ATM_points tool to convert them into plain text files, which later can be read into other ASP tools using the formatting:
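A sketch of the conversion and the subsequent use of the csv file; the input file name is hypothetical, and the --csv-format column indices are illustrative and should be checked against the generated file:

```shell
# Converts the packed ATM file into a plain text csv
# alongside the input.
extract_icebridge_ATM_points ILATM1B_20091105_123456.qi

pc_align --max-displacement 1000 \
    --csv-format '1:lat 2:lon 3:height_above_datum' \
    ILATM1B_20091105_123456.csv stereo_out/run-PC.tif -o aligned/run
```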

To run the tool, just pass in the name of the input file as an argument and a new file with a csv extension will be created in the same directory. Using the ATM sensor data is similar to using the LVIS sensor data.

For some IceBridge flights, lidar-aligned DEM files generated from the DMS image files are available; see the web page at http://nsidc.org/data/iodms3. These files are improperly formatted and cannot be used by ASP as is. To correct them, run the correct_icebridge_l3_dem tool as follows:
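Consistent with the description below, the tool takes the input DEM, the output DEM, and a hemisphere flag; the file names here are illustrative, and 0 is used since this example's flight is over Antarctica:

```shell
correct_icebridge_l3_dem IODMS3_20091105_dem.tif fixed_dem.tif 0
```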

The third argument should be 1 if the DEM is in the northern hemisphere and 0 otherwise. The corrected DEM files can be used with ASP like any other DEM file.



9.4. Solving for Pinhole cameras using GCP

If for a given image the intrinsics of the camera are known, and also the longitude and latitude (and optionally the heights above the datum) of its corners (or of some other pixels in the image), one can bypass the camera_solve tool and use bundle_adjust to get a rough initial camera position and orientation. This simple approach is often beneficial when, for example, one has historical images with rough geo-location information. Once a rough camera is created for each image, the cameras can then be bundle-adjusted jointly to refine them.

To achieve this, one creates a camera file, say called init.tsai, with only the intrinsics, and using trivial values for the camera center and rotation matrix:
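The original listing was lost in this copy; a sketch of such a file, with the camera at the origin, an identity rotation, no distortion, and placeholder intrinsics, might look like:

```
VERSION_3
fu = 28.0
fv = 28.0
cu = 17.9
cv = 11.9
u_direction = 1 0 0
v_direction = 0 1 0
w_direction = 0 0 1
C = 0 0 0
R = 1 0 0 0 1 0 0 0 1
pitch = 0.0064
NULL
```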

Next, one creates a ground control points (GCP) file (Section 12.3.1), named, for example, gcp.gcp, containing the pixel positions and longitude and latitude of the corners or other known pixels (the heights above datum can be set to zero if not known). Here is a sample file, where the image is named img.tif (below, the latitude is written before the longitude).
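The sample itself was lost in this copy; the sketch below uses made-up coordinates, one row per image corner. The columns are: point id, latitude, longitude, height above datum, the three position sigmas, the image name, the pixel coordinates, and the two pixel sigmas.

```
1 37.53 -122.40 0 1 1 1 img.tif 0    0    1 1
2 37.53 -122.38 0 1 1 1 img.tif 5000 0    1 1
3 37.51 -122.38 0 1 1 1 img.tif 5000 4000 1 1
4 37.51 -122.40 0 1 1 1 img.tif 0    4000 1 1
```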

Such a file can be created with stereo_gui (Section 12.45.2.2).

One runs bundle adjustment with this data:
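The original command was lost in this copy; a sketch might look like the following, where --transform-cameras-using-gcp performs the local-to-world transform and zero iterations keeps the intrinsics and GCP fixed:

```shell
bundle_adjust img.tif init.tsai gcp.gcp \
    -o ba/run -t nadirpinhole --datum WGS84 \
    --inline-adjustments --num-iterations 0 \
    --transform-cameras-using-gcp
```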

which will write the desired correctly oriented camera file. Using a positive number of iterations will refine the camera.

It is important to look at the residual file:

after this. The third column in this file contains the optimized heights above the datum, while the fourth column contains the reprojection errors from the corners on the ground into the camera.

If bundle adjustment is invoked with a positive number of iterations and with a small value for the robust threshold, it tends to optimize only some of the corners and ignore the others, resulting in a large reprojection error, which is not desirable. If, however, this threshold is too large, it may try to optimize the camera too aggressively, resulting in a poorly placed camera.


Sometimes it works to just get a rough camera estimate from this tool for each image individually, using zero iterations, as above, and then bundle adjust all images together with the obtained rough cameras, possibly also using the GCP files, this time with a positive number of iterations.

One can also use the bundle adjustment option --fix-gcp-xyz to not move the GCP during optimization, hence forcing the cameras to move more to conform to them.

ASP provides a tool named cam_gen which can also create a pinhole camera as above and, in addition, is able to extract the heights of the corners from a DEM (Section 12.6).

9.5. Solving For Intrinsic Camera Parameters

If nothing is known about the intrinsic camera parameters, it may be possible to guess them with some experimentation. One can assume that the distortion is non-existent and that the optical center is at the image center, which makes it possible to compute cu and cv. The pitch can be set to some small number, say \(10^{-3}\) or \(10^{-4}\). The focal length can be initialized to equal cu or a multiple of it. Then camera_solve can be invoked, followed by stereo, point2mesh, and point2dem --errorimage. If, at least towards the center of the image, things are not exploding, we are on a good track.
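A sketch of that initialization, assuming a hypothetical 5616 x 3744 pixel image and a pitch of \(10^{-3}\):

```python
# Rough intrinsics initialization, assuming the optical center
# lies at the image center and distortion is zero.
width_px, height_px = 5616, 3744   # hypothetical image size
pitch = 1e-3                       # assumed pitch (mm per pixel)

cu = width_px / 2.0 * pitch        # optical center, in mm
cv = height_px / 2.0 * pitch
fu = fv = cu                       # start focal length equal to cu

print(cu, cv, fu)
```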

Later, the camera parameters, especially the focal length, can be modified manually, and instead of using camera_solve again, just bundle_adjust can be called using the camera models found earlier, with the options to float some of the intrinsics, that is, using --solve-intrinsics and --intrinsics-to-float.

If the overall results look good, but the intersection error after invoking point2dem around the image corners looks large, it is time to use some distortion model and float it, again using bundle_adjust. Sometimes, if invoking this tool over many iterations, the optical center and focal length may drift, and hence it may be helpful to keep them fixed while solving for distortion.

If a pre-existing DEM is available, the tool geodiff can be used to compare it with what ASP is creating.

Such a pre-existing DEM can be used as a constraint when solving for intrinsics, as described in Section 8.2.1.




