
DotSpatial right for a track viewer?

Dec 15, 2010 at 8:07 PM

First, thanks to this community for making DotSpatial available.

I'd love to discover that the framework can meet my needs at work, but before I devote a lot of time to trying it out, I was hoping to ask some specific questions here about issues I've run into in the past with other .NET frameworks/GIS efforts. I'm not looking for a lot of detail, just some basic information about whether DotSpatial can handle these issues, how complicated it is, whether anyone has experience with them, etc.

By the way, the app I would potentially build has to be a stand-alone app, not depending--for example--on any networked server for map or shape data.

1) Deep-level zooming and panning.  I need to build a map that will let a user zoom from the entire extent of the earth to an extent of a few meters.  At any zoom level, the map should provide smooth panning.  I've had issues using transformations in .NET such that if you are not centered near the origin of a view and are highly zoomed/scaled, the rendering gets messed up and panning becomes very quantized and/or erratic.  Then, of course, there’s map tiling...

2) Viewing portions of large polygons/shapes.  When a user is highly zoomed in, I want to show him very detailed outlines of borders/boundaries.  Is DotSpatial efficient when dealing with large, detailed shape files?  This is especially interesting if the view is zoomed in to see only a relatively small part of the shape that, conceptually, should be easily viewed and panned (except for the fact that the “unseen” portion of the shape file is still huge).

3) Dealing with lots of tracks.  In my work, I would potentially be loading in thousands of “tracks” (which are basically shapes with dynamic position information).  A symbol for a track may consist of a polygon with up to a few dozen points.  Can you imagine a DotSpatial map displaying all that data with smooth panning, smooth zooming, and the ability to change positions on all those symbols at least once a second?

4) WPF integration.  Can you place WPF shapes on a DotSpatial map?  This may not be necessary depending on what you can do with native symbols on a DotSpatial map, but I recently started using WPF path objects to handle drawing symbols, displaying labels, etc.

5) Mouse-point hit tests.  Does DotSpatial provide good “hit testing” to determine which items are within some range of a click?

 

Thanks so much for any information anyone can provide.

Dec 15, 2010 at 8:31 PM

I’ll address #2…

We are not there yet on this one, but it is something we are working on, and we have built many of the low-level components we need to do it. This is a high-priority requirement for the application I’m working on, but I can’t do it alone; Ted has been helping a lot with this. The main thing we are lacking right now is the map layers that will incorporate the low-level components for reading shapes from large shapefiles. Currently there is no serialized spatial indexing scheme, so a spatial query must do a linear search of the shape ranges. The performance is still pretty good, IMHO, but we will eventually need serialized spatial indexing.
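The linear search over shape ranges described above can be sketched roughly as follows. This is a Python illustration of the idea only; DotSpatial itself is C#, and the function names here are mine, not the actual API:

```python
def intersects(a, b):
    """Axis-aligned bounding-box overlap test; boxes are (minx, miny, maxx, maxy)."""
    return not (a[2] < b[0] or a[0] > b[2] or a[3] < b[1] or a[1] > b[3])

def query_shapes(shape_ranges, query_extent):
    """Linear scan over per-shape extents: no spatial index, so O(n) per query,
    but each test is just four comparisons, which is why it stays tolerable."""
    return [i for i, ext in enumerate(shape_ranges) if intersects(ext, query_extent)]

# example: three shapes, query the unit square
shape_ranges = [(0, 0, 1, 1), (5, 5, 6, 6), (0.5, 0.5, 2, 2)]
hits = query_shapes(shape_ranges, (0, 0, 1, 1))
```

A serialized spatial index (R-tree, quadtree, etc.) would replace the linear scan with a tree descent, which is what makes it worth persisting alongside the shapefile.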

Kyle


Dec 15, 2010 at 8:59 PM
Edited Dec 15, 2010 at 9:28 PM

Kyle, Thanks for the info!  Out of curiosity, have you considered a tiling scheme?  I've seriously considered heading down that road on a current project, but every time I get close to moving from the drawing board to actual coding, I stop, think "someone must already have a solution to this," head to Google, and start searching.  That's how I came across DotSpatial BTW :)


-Jay


Dec 15, 2010 at 9:27 PM

1) We do not use the "Transform" method on the Graphics object, because it only supports floating-point precision and has the problems you mentioned.  There are lots of issues if you work with floating-point values in a frame that is good for the whole world but has underflow problems when zoomed in.  We had the same major problems using "Transform" to control our coordinates, as well as when we worked with a 3D viewer that used DirectX drawing.  We currently use our own transform code that translates double-precision coordinates directly into integer pixel coordinates, and then builds the GraphicsPaths or other rendering items from the integer coordinates.  This has reasonably good performance: the translation code is much faster than the rendering code, so even if we end up translating a lot of shapes, it is fast as long as we don't have to render a lot of shapes.  We also trim duplicate points before sending shapes for rendering, and I believe we skip transforming polygons whose extents do not intersect the geographic view.
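The transform approach described above can be sketched like this. A minimal Python illustration (DotSpatial's actual code is C#; the names are mine): the scale factors and offsets are computed in full double precision, and only the final result is truncated to integer pixels, which is what avoids the deep-zoom precision problems of a float-only Graphics.Transform:

```python
def make_transform(view_minx, view_miny, view_maxx, view_maxy, width, height):
    """Build a world->pixel transform in full double precision; only the final
    result is rounded to integers, so precision survives deep zoom levels."""
    sx = width / (view_maxx - view_minx)
    sy = height / (view_maxy - view_miny)
    def to_pixel(x, y):
        return (int(round((x - view_minx) * sx)),
                int(round((view_maxy - y) * sy)))   # y flips for screen coords
    return to_pixel

def trim_duplicates(points):
    """Drop consecutive vertices that land on the same pixel before rendering."""
    out = []
    for p in points:
        if not out or p != out[-1]:
            out.append(p)
    return out

# example: a 10 m view at large projected coordinates, 1000x1000 px
to_pixel = make_transform(500000.0, 4000000.0, 500010.0, 4000010.0, 1000, 1000)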

2) I think Kyle addressed this pretty well.  In truth, you have to be talking about some very large shapefiles before you will have problems.  My experience is that rendering slows down noticeably with very large shapefiles that have all their shapes in view.  It draws much better once you zoom in a bit, but while everything is in view it is a little slow.  Kyle is correct that at the upper extreme we can imagine a shapefile so large that the vertices just won't fit into memory.  We have systems that prevent attributes from being loaded if they are not needed (or page through them as they are needed), but we currently require the vertices to be in memory for our drawing layers.  Kyle and I are eventually looking to improve that to support vector layers with no in-memory vector content, but we are worried about performance, etc.  I, for one, plan on a Christmas break and don't plan on making many improvements until January.

3) Here is the thing: you will have to experiment a bit and find what works best for you.  We support your standard layers that show up in the legend.  We also support geographic "drawing" layers that don't appear in the legend and represent georeferenced content drawn on top of your layers.  Both force a redraw of your big layers, which can be slow when zoomed all the way out.  What we have done to allow for sprites (or literally anything else you want drawn) is provide access to the Paint event (or else the OnDraw method in custom MapFunctions).  This uses GDI+ drawing, so you will have to use a GraphicsPath and not a WPF shape.  The map supports PixelToProj and ProjToPixel routines to do the translation into pixel coordinates.  Therefore, you can draw your custom content to the buffer in the Paint event handler much more frequently than would be possible if you stored your moving layers as ordinary geographic layers.  In other words, when you change the map extent, it forces a redraw of the layers onto a hidden bitmap; Map.Invalidate() only redraws from that cached bitmap.  Hiding portions of the control with another control and revealing them again simply redraws from the bitmap, so it is fast.  The more you load into the custom paint handling, the slower it will get, but that cost depends only on the dynamic custom content you are drawing all the time, and not on the geographic layers.  You might even create your own transparent bitmap that gets updated as your tracks change, invalidates the map, and is drawn as an updated stencil in the Paint event.
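The cached-buffer pattern described above can be reduced to a small sketch. This is a Python illustration of the idea only, with hypothetical names (the real mechanism is the GDI+ Paint event over a hidden bitmap in C#): static layers render once into a cache, and the per-frame paint just composites the cheap dynamic sprites over it.

```python
class MapBuffer:
    """Static layers render once into a cached buffer; dynamic sprites are
    composited over that cache on every paint, so moving symbols never force
    a full layer redraw.  Illustrative names, not the DotSpatial API."""
    def __init__(self, layers):
        self.layers = layers
        self._cache = None
        self.layer_renders = 0            # counts the expensive full redraws

    def _render_layers(self):
        self.layer_renders += 1
        return list(self.layers)          # stand-in for drawing to a bitmap

    def invalidate_extent(self):
        self._cache = None                # extent changed: cache is stale

    def paint(self, sprites):
        if self._cache is None:
            self._cache = self._render_layers()
        return self._cache + sprites      # cheap composite on every frame

# five animation frames: tracks move every frame, layers render only once
m = MapBuffer(["roads", "parcels"])
frames = [m.paint([f"track@{t}"]) for t in range(5)]
```

Updating thousands of track symbols once a second then only costs the sprite compositing, not a layer redraw, as long as the extent stays fixed.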

4) No WPF.  I think there was an issue with Mono support for this when we first looked at it; I'm not sure if that is still true.  I would love to use the new bitmap objects in the Media framework, but I'm not sure what Mono can handle.  You can, however, draw very complex or custom symbols.  We also support using scalable fonts as symbols, creating a custom ISymbol where you control the drawing yourself using GDI+ graphics, or supplying a bitmap for the symbol that you have drawn yourself, etc.

5) Yes, though someone recently discovered a bug in the case where you are not loading a shapefile from a file but have created a custom featureset from features.  In that case it seems to test only the extent instead of the full intersection, which I will fix tonight, so it should not be a factor by the time you get around to using it.  Depending on your performance needs, we have ShapeRange objects that can test for "Intersects" with a coordinate, and we have topology classes that can do the same thing.  The topology classes follow the OGC convention but seem to be slower, so I'd recommend using them as little as possible and doing as much as possible with the ShapeRange classes found in myFeatureSet.ShapeIndices.  Understand that, especially with large featuresets, if you touch the myFeatureSet.Features list, it generates a whole secondary set of features for tracking, using feature objects instead of shapes.  These have more comprehensible editing capabilities, but they take up a lot of memory and are slow.  If you can get away with using ShapeRanges, I'd recommend it.
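The two-stage hit test described above (cheap extent check first, exact intersection only for candidates) can be sketched in Python. This is a generic illustration of the technique, not DotSpatial code; the exact test here is standard ray casting:

```python
def point_in_polygon(pt, ring):
    """Ray-casting point-in-polygon test; ring is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    j = len(ring) - 1
    for i in range(len(ring)):
        xi, yi = ring[i]
        xj, yj = ring[j]
        # count crossings of a horizontal ray extending left from pt
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def hit_test(pt, shapes):
    """Cheap bounding-extent rejection first; exact test only for survivors.
    Each shape is (extent, ring) with extent = (minx, miny, maxx, maxy)."""
    hits = []
    for idx, (extent, ring) in enumerate(shapes):
        minx, miny, maxx, maxy = extent
        if minx <= pt[0] <= maxx and miny <= pt[1] <= maxy:
            if point_in_polygon(pt, ring):
                hits.append(idx)
    return hits

shapes = [((0, 0, 4, 4), [(0, 0), (4, 0), (4, 4), (0, 4)])]
```

The extent-only bug mentioned above would correspond to skipping the `point_in_polygon` step, which reports hits anywhere inside a shape's bounding box.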

 

Dec 15, 2010 at 10:20 PM

I have not considered a tiling scheme.  I do not think tiling works very well with vector data; an R-tree, quadtree, or similar structure is more appropriate for vectors.  I've implemented such a tiling scheme in the past for vectors, but I had everything cached in memory (I was after performance and willing to sacrifice memory).  Assuming the tiles you are referring to really just contain pointers to the vectors, it is doable, but it is sensitive to different view scales: you have to pick a scale at which the tiling is optimal.  Also, with a tiling scheme you end up with a lot of duplicate pointers, since vectors will generally cross tile boundaries.

Kyle

Dec 15, 2010 at 11:25 PM

Tiling works better in the web server context, where basically the exact same symbology and layer ordering is used every time and is not controlled by the public.  In fact, because we support rendering to bitmaps, you could write your own script to create image tiles and then use them within DotSpatial as you describe.  I used BlueMarble tiles within an earlier DirectX version of the software, but those were more for image layers.  For the standard use case, this isn't really necessary.  For the desktop use case that I am thinking of, what we like most is that the symbology, layer ordering, zoom scale, and so on are all dynamic and use comparatively small amounts of source data to generate and support a very wide array of physical designs and appearances.  Symbology that is based on attributes or can be updated by the user is part of what makes GIS so powerful, and generally we are interested in keeping this as versatile as possible for the most conventional use cases.  There are currently many web server projects all about hosting tiled image views, like MapServer, OpenLayers, OpenStreetMap, and others.  I don't want to discourage you from using tiles with DotSpatial, but rather from thinking that strictly relying on tiles would be a better display model for our primary use case.  Also, I think there have been some recent contributions in the way of something called BruTileLayer.  I don't really know what that is, but it sounds something like a Bing Maps or WMS tile source.  It might be better than relying on a monster shapefile for ordinary things like street maps, roads, etc.  I haven't tried it myself yet, however, so no guarantees.

Ted

 

Dec 17, 2010 at 2:22 PM
shade1974 wrote:

...because we support rendering to bitmaps...

Ted


This sounds interesting.  I know you can render the map to a bitmap using the standard Control.DrawToBitmap(...) method on the Map control, but I guess you are probably referring to a better way of doing it?

Dec 17, 2010 at 3:18 PM

Ted,

Thanks for the info--I understand your points.  I was thinking a tiling method could speed up determination of which portion of a huge polygon (for example) to render--pre-processing the polygon into some type of tiling scheme so you can quickly determine which tiles to render as the user pans/zooms.  If that's not an issue for most cases, or if--as Kyle pointed out--another method is better with vector graphics, then obviously tiling is not a desirable solution.

Still thinking about the various issues, but I might take some time to play around with the framework over the holidays.  Nothing here has told me that DotSpatial won't work for me, and you never know until you get your hands a little dirty :)

I am a little disappointed that WPF isn't a consideration.  I understand the rationale, and I'm not necessarily the biggest WPF fan, but I have played around with it enough to see some benefits, and if MS decides it's the way of the future, then at some point it might be a good idea to just embrace it.  But anyway...

 

Thanks again for all the replies.  I'll keep checking back in case other inputs come in.

 

-Jay


Dec 17, 2010 at 3:29 PM
Edited Dec 17, 2010 at 3:43 PM

 

Hehe... I just realized that the "tiling" scheme I had been thinking of using in my current project was actually a quadtree; I just wasn't used to that term.  I was seeing it as variable tiles, each of which could hold either a sub-piece of a polygon or four child tiles (depending on how many vertices were in the parent tile compared to your target maximum per tile).  Nice to see that there's a name for it  :)
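The variable-tile scheme described above is exactly a point-capacity quadtree. Here is a minimal Python sketch of the structure (generic illustration, not DotSpatial code): a leaf holds points until it exceeds a capacity, then splits into four children and redistributes.

```python
class QuadTree:
    """A leaf holds points until it exceeds `cap`, then splits into four
    quadrant children and redistributes -- the variable-tile scheme above."""
    def __init__(self, minx, miny, maxx, maxy, cap=4):
        self.bounds = (minx, miny, maxx, maxy)
        self.cap = cap
        self.points = []
        self.children = None

    def insert(self, x, y):
        if self.children is not None:
            self._child_for(x, y).insert(x, y)
            return
        self.points.append((x, y))
        if len(self.points) > self.cap:
            self._split()

    def _split(self):
        minx, miny, maxx, maxy = self.bounds
        mx, my = (minx + maxx) / 2, (miny + maxy) / 2
        self.children = [QuadTree(*b, cap=self.cap) for b in
                         [(minx, miny, mx, my), (mx, miny, maxx, my),
                          (minx, my, mx, maxy), (mx, my, maxx, maxy)]]
        pts, self.points = self.points, []
        for x, y in pts:
            self._child_for(x, y).insert(x, y)

    def _child_for(self, x, y):
        mx = (self.bounds[0] + self.bounds[2]) / 2
        my = (self.bounds[1] + self.bounds[3]) / 2
        return self.children[(x >= mx) + 2 * (y >= my)]

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

qt = QuadTree(0, 0, 10, 10, cap=4)
for i in range(10):
    qt.insert(i, i)
```

Storing polygon sub-pieces instead of points works the same way, except (as Kyle notes) pieces crossing a split boundary end up referenced by more than one child.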

 

-Jay


Dec 17, 2010 at 4:55 PM

Ah, yes, that makes more sense.  I think ArcGIS uses a binary tree of some sort as well to allow elimination of a large number of elements at once.  I don't know that we really need it when you are zoomed in.  The best thing I can think of now that would improve display performance for large vectors is to cache a bitmap "overview."  A lot of unnecessary and redundant drawing takes place for large vector datasets when you are zoomed out to see the whole thing at once.  If we had a generalized vector preview, it would drastically improve the mechanical performance.  I don't think we would even need to add the complexity of a quadtree just yet.

Here is what it could do.  Log the pixel coordinates for each shape as it gets drawn to an image the size of a typical monitor, and remove any vertices that map to the same pixel.  For polygons we need to keep a minimum of three vertices (two for linestrings), even if they do map to the same place.  Then we can consider the "coverage" of each shape: if the drawing region for a new shape is already completely represented, we don't need to draw it at all, and the whole shape is removed from the generalized model.  We would obviously only consider doing this test with very large polygon shapefiles.
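The vertex-thinning step above can be sketched in a few lines. A hedged Python illustration of the proposed idea (this feature does not exist yet in the library, per the post; `to_pixel` stands in for the world-to-pixel transform at overview scale):

```python
def generalize_ring(ring, to_pixel, min_keep=3):
    """Collapse vertices that map to the same screen pixel at overview scale.
    Keep at least `min_keep` vertices (3 for polygon rings, 2 for linestrings)
    so the shape stays structurally valid even when it degenerates."""
    pixels = []
    for x, y in ring:
        p = to_pixel(x, y)
        if not pixels or p != pixels[-1]:
            pixels.append(p)
    if len(pixels) < min_keep:
        # degenerate at this scale: pad with repeats to preserve validity
        pixels += [pixels[-1]] * (min_keep - len(pixels))
    return pixels

# example: a coarse 10-units-per-pixel overview transform
to_pixel = lambda x, y: (int(x // 10), int(y // 10))
```

A detailed coastline with thousands of vertices typically collapses to a few hundred pixels' worth at overview scale, which is where the redundant-drawing savings come from.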

As a stand-in until I get back from the break, you can do this manually.  Just because you have a "layer" sitting in memory does not mean you have to keep the same underlying FeatureSet at all times.  If you generated a low-definition preview and saved it separately, you could show the preview and then switch to higher-definition feature sets as you zoom in.  In practice, most professional systems use powerful Oracle-driven database queries that would be hard to parallel with shapefiles, but a little data preparation ahead of time can go a long way toward improving display performance.  If you break a big state-wide shapefile down into counties, for instance, and then load the counties dynamically as the extents change, you don't need me to write anything fancy and could get very good performance.  You could load content as a drawing layer, but then it would appear on top of your other vector sets (or out of order), and that might not work well.  The better model would be to simply swap out the FeatureSet being drawn by an active layer and invalidate the MapFrame.

Ted