Tuesday, December 8, 2015

Special Topics and Open Source Food Deserts Report

      Welcome to the final post for the Special Topics in GIS course. This last portion is dedicated to the report on Food Deserts that culminates the last few weeks of preparation and analysis. The overall objective for this project was to explore open source GIS software and web mapping applications to create a simple but custom web map that works in concert with standalone maps generated with other open source GIS software, specifically QGIS.
       The analysis results are presented in the fully narrated PowerPoint presentation linked below. The independent web map, also linked below, is embedded in the presentation. For those who do not want to download the full PowerPoint, as it is fairly large (25 MB), the resulting map is provided below for your convenience.

A Growing Trend: Food Deserts PowerPoint Report

Palm Springs Food Desert Interactive Web Map

        A brief overview: a food desert is an area in or around a city or urban center that lies more than one mile from a grocery store offering fresh produce or other whole foods. The lack of these fresh foods can lead to compounding health issues for those who do not have the means to travel farther to grocery stores that carry them and are stuck with local, potentially unhealthy options. My project involved investigating the city of Palm Springs, California for the presence of food deserts. The data was examined at the census tract level, using population information from the 2010 census. The center of each census tract was evaluated to determine which tracts fall more than one mile from the grocery stores present. These food deserts were then color coded by population distribution, highlighting those most impacted. The interactive web map supplements this information by giving you a better visual sense of the neighborhood distribution for the area in relation to the census tracts.
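
For readers curious how this kind of centroid-distance test can be scripted, here is a minimal open source sketch in Python with geopandas rather than the exact QGIS workflow used; the file paths, the POP2010 field name, and the choice of UTM zone 11N are assumptions.

```python
import geopandas as gpd

MILE_M = 1609.34  # one mile in meters

# Hypothetical inputs, reprojected to NAD83 / UTM 11N so distances are in meters
tracts = gpd.read_file("palm_springs_tracts.shp").to_crs(epsg=26911)
stores = gpd.read_file("grocery_stores.shp").to_crs(epsg=26911)

# Distance from each tract centroid to the nearest grocery store with fresh produce
centroids = tracts.geometry.centroid
tracts["dist_to_store_m"] = centroids.apply(lambda c: stores.distance(c).min())

# A tract is flagged as a food desert when its centroid is more than 1 mile from any store
tracts["food_desert"] = tracts["dist_to_store_m"] > MILE_M

# Share of the 2010 population living in food-desert tracts (reported as about 51% below)
pct = tracts.loc[tracts["food_desert"], "POP2010"].sum() / tracts["POP2010"].sum()
print(f"{pct:.0%} of the population lives in a food-desert tract")
```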


    Overall, the analysis found that 51% of the population is impacted by food desert conditions, which clearly highlights a food desert problem in the area. This data can help inform city planners when siting future grocery stores or healthy food alternatives closer to impacted neighborhoods. Education about food deserts, as followed through the last few weeks, is the biggest way to combat their spread: an informed public.
    This presentation was created entirely with open source data and software (minus a couple of cheating moments using ArcGIS). Open source products are hugely beneficial, and with a little time they can be employed by just about anyone. They are a wonderful set of tools for people who might not have access to proprietary spatial analysis software. Thank you for joining me on this journey, and I look forward to you following along as I embark on a GIS internship.

Wednesday, December 2, 2015

Special Topics, Food Deserts and Map Box

Welcome to the continued look at open source GIS with a flavor for fresh food, or lack thereof as is the case for areas identified as food deserts. This is a continuation of the analysis started last week. This week was dedicated to the following objectives:
  • Tile Shapefiles for internet usage
  • Create a custom basemap with the use of Mapbox
  • Utilize web mapping to communicate a subject
    Essentially, last week you saw some analysis with premade data for the Pensacola area of Escambia County, Florida. Those datasets were already available and were manipulated to learn how to determine which areas are or aren't food deserts. This week is about taking my own acquired and generated datasets and feeding them through the process of making an open source web map. The tools for this are QGIS, Tilemill, and Mapbox. Mapbox is the new addition to this week's processing and analysis; specifically, tiled layers generated in Tilemill were uploaded onto a basemap in Mapbox. You can see that below.


     This is a very rough product just demonstrating that I could take a basemap, in this case the satellite imagery, and add the layers over top, showing grocery stores (red dots) and the food desert census tracts. The visible tracts are food deserts; all other areas should be considered not food deserts. These deserts are color coded by population affected: the darker the color, the more people impacted. Once again, this is a rough overview of what can be added in Mapbox. The stylized finished product coming next week will involve much more functionality as it transitions to a fully functioning web map through the use of Leaflet. It will be like last week's map example, only tailored to my personal study area and information. So you might ask, what has been done so far?

    My data covers the Palm Springs city area as defined by the US Census Places layer, a subset of Riverside County, California. My boundary files for this area were all derived from US Census partnership files. The layers obtained include the state outline, the Riverside County boundary, places within Riverside County, and census tracts within Riverside County, along with the 2010 US Census population data. The Palm Springs boundary was clipped from the places within Riverside County, then used as the study area to clip the appropriate census tracts. Tabular census data was joined to the census tracts by tract ID.
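
As a rough illustration of those clip-and-join steps in open source Python (geopandas), under assumed file names and field names such as NAME and TRACTCE:

```python
import geopandas as gpd
import pandas as pd

places = gpd.read_file("riverside_places.shp")          # US Census places for Riverside County
tracts = gpd.read_file("riverside_tracts.shp")          # census tracts for Riverside County
pop = pd.read_csv("census_2010_pop.csv", dtype={"TRACTCE": str})   # tabular 2010 population

# Pull the Palm Springs boundary out of the county-wide places layer
palm_springs = places[places["NAME"] == "Palm Springs"]

# Clip the county tracts to the study area, then join population by tract ID
study_tracts = gpd.clip(tracts, palm_springs)
study_tracts = study_tracts.merge(pop, on="TRACTCE", how="left")
study_tracts.to_file("palm_springs_tracts.shp")
```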
    Locating the pertinent grocery stores took a combination of Google Earth searches, Google searches, and Yellowpages.com to cross-reference all available grocery stores carrying fresh produce and the like. Nine of these points were marked within Google Earth. The KML file was then exported from Google Earth, and within ArcMAP the KML to Layer tool was used. From there I had a problem uploading this point file into Tilemill and having it display appropriately. My workaround was to create a blank point feature class and heads-up digitize new points over the existing ones, using point snapping to ensure the points were properly overlaid. This newly created layer worked with Tilemill for the creation of the Mapbox tiles.
     This data is already starting to show trends and pertinent information for the area, but I will save those results for next week's post. Please stay tuned, and thank you.

Thursday, November 19, 2015

Special Topics; Food Deserts and Open Source Webmapping

Greetings,
     Welcome to the continuation of the Special Topics look at food deserts. This is the first of two weeks of analysis-type work, although this week more effort went into how the data will be presented in a few weeks by looking into web mapping applications built around presenting maps to the public in an open source manner. Last week we started looking at open source, i.e., free, ways to work with GIS data, such as QGIS. This week we transform some of that prep work into web format in preparation for distributing it to the masses. The overall objectives of this week are:

  • Navigate through, and add layers to Tilemill
  • Gain familiarity with Leaflet
  • Use tiled layers and plug-ins in a Web map. 
   So what are the programs these objectives mention? The main theme to note before looking at them individually is that they are all OPEN SOURCE and FREE. That means anyone who wants to can acquire them, learn them, and in most cases contribute back to the community.
   Tilemill is interactive mapping software predominantly used by cartographers and journalists to create interactive maps for sharing with the public.
   Leaflet is a JavaScript library that allows you to code HTML web maps for display, much like the one linked below.
   The tiled layers and plug-ins mentioned in the last objective were pulled into a web map with some basic HTML code written in Notepad and shared on a web mapping host.
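
Since the actual page was hand-coded in HTML against Leaflet, here is only a loose Python-side analogue using folium, which generates Leaflet maps; the coordinates, layer names, and files below are assumptions, not the map linked further down.

```python
import folium

m = folium.Map(location=[30.42, -87.22], zoom_start=11)  # roughly the Pensacola, FL area

# A thematic overlay (toggleable via the layer control), plus a marker and a circle
folium.GeoJson("food_desert_tracts.geojson", name="Food desert tracts").add_to(m)
folium.Marker([30.44, -87.19], popup="Grocery store").add_to(m)
folium.Circle([30.40, -87.25], radius=1609, popup="1 mile radius").add_to(m)
folium.LayerControl(position="topright").add_to(m)

m.save("EscWebMap_sketch.html")   # writes a standalone Leaflet HTML page
```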

   This link, http://students.uwf.edu/bd26/STGIS//EscWebMap.html , is the culmination of this week's efforts. It combines the objectives mentioned above with the data we looked at last week for food deserts in the Pensacola, FL area. Every feature or option on this map falls under one of the objectives above. Note the layers that can be toggled on and off in the upper right, and the find function in the lower left. The points, polygon, and circle are also very deliberate; each of these elements is an individual block of code planned out to contribute to the map in this specific way. This was all done to get familiar with these applications and to get ready to present my own study area of food deserts in a couple of weeks, not just the Pensacola information shown here. Stay tuned as I continue to work toward open source processing and food desert analysis. Thank you.

Saturday, November 14, 2015

Special Topics, Food Deserts and QGIS

      Welcome to the beginning of the last multi-week module in Special Topics in GIS. The focus for the rest of the course is twofold. The subject for the remainder of the class is food deserts and their increasing proliferation as urban areas expand faster than the markets and grocers carrying wholesome, nutritious foods such as fresh fruits, vegetables, and other produce. The second large aspect of this project is that all preparation, analysis, and reporting for the focus area will be done using open source software. As the certificate program as a whole draws to a close, this is a good introduction to what is available outside of ESRI's ArcGIS suite of applications. This week I specifically used Quantum GIS (QGIS) to build the basemap and do the initial processing of food desert data for the Pensacola area of Escambia County, Florida. The overall objectives going into this week are below:

  • Perform basic navigation through QGIS
  • Learn the differences in data processing with multiple datasets and geoprocessing tools in QGIS, while employing multiple data frames and similar functionality.
  • Experience the differences of map creation with the QGIS specific Print Composer

 Here is a map not unlike many of the others I have created using ESRI's ArcMAP, and that is in fact the point of one huge aspect of this project. There are open source applications, defined as free-to-use software whose improvements you can personally suggest for update and redistribution to the masses, that perform quite similar tasks and produce similar outputs. QGIS is one of these options. Given the background in ArcGIS from the rest of this certificate program, there is not a steep learning curve in picking up QGIS and running with it. There are definitely differences, but with a little instruction it becomes fairly intuitive, just like ArcMAP. Now you might ask: if this software is so similar, why wouldn't everybody choose it over ESRI products? There are still advanced tools and spatial analysis functions in ArcGIS that are beyond this software, and not everything is wholly interoperable. So for basic to moderate tasks, absolutely, they can be done in QGIS, but sometimes there is no substitute for the processing ease and power of ArcGIS.
Back to this specific map: what you're looking at is two frames, or two sides, of the same information. You are presented with both food deserts and food oases by census tract for the Pensacola area of Escambia County. These deserts were calculated by comparing the centroid (geographic center) of each census tract with its distance to a grocery store; tracts whose centroids fall beyond that distance from a grocery store providing fresh produce are said to be in a food desert. The average person in these areas has to travel farther to obtain fruits, vegetables, and the like, and in the meantime other closer, less healthy alternatives might take precedence. Ultimately, those with less access are likely to be less healthy overall, and that is the issue we are starting to get into with this subject. Keep checking for the next installments of analysis as we continue to look at this problem. The area above is just an example; as the project moves along, my analysis will focus on Palm Springs, California. Thank you.

Sunday, November 8, 2015

Remote Sensing and Supervised Classification

This week's focus is on supervised classification, particularly the Maximum Likelihood method. This continues last week's start to the classification discussion, which covered unsupervised classification. The exercise and assignment, culminating in the map below, revolved around imagery provided by UWF with the following objectives:
    • Create spectral signatures and AOI features
    • Produce classified images from satellite data
    • Recognize and eliminate spectral confusion between spectral signatures
       Supervised classification revolves around creating training sites to train the software on what to look for when conducting the classification. This is accomplished by digitizing polygon areas of spectrally similar pixels; examples would be dense forest, grassland, or water. Each area has a distinct spectral signature. These signatures are used to evaluate the whole image and allow the software to automatically classify all matching spectral areas. The overall process usually has four steps: get your image, establish spectral signatures, run the classification based on those signatures, and then recode or label the resulting classes according to your class schema.
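
As an aside, a comparable workflow can be sketched outside ERDAS: scikit-learn's quadratic discriminant analysis behaves like a maximum likelihood classifier when each class is modeled as a Gaussian. The band stack and training sample files below are assumptions, not the actual lab data.

```python
import numpy as np
import rasterio
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical multi-band stack of the study area
with rasterio.open("germantown_stack.tif") as src:
    img = src.read()                      # shape: (bands, rows, cols)
n_bands, rows, cols = img.shape
pixels = img.reshape(n_bands, -1).T       # one row per pixel, one column per band

# Training samples drawn from digitized AOI polygons (the spectral signatures):
# X = band values per sampled pixel, y = class labels (forest, grass, water, urban, ...)
X_train = np.load("training_pixels.npy")
y_train = np.load("training_labels.npy")

clf = QuadraticDiscriminantAnalysis().fit(X_train, y_train)   # Gaussian model per class
classified = clf.predict(pixels).reshape(rows, cols)          # classified image
```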

      The process is fairly straightforward, with only a couple of things to watch out for when establishing the spectral signatures, chiefly avoiding spectral confusion. Spectral confusion is where multiple features exhibit similar spectral signatures. It occurs most frequently in the visible bands and can be avoided by checking with tools such as band histograms or spectral mean plotting, which shows the mean spectral value of one or more bands simultaneously. We can see some of the results of this below, such as the merging of the urban / residential and the roads / urban mix classes.

      This is a land use derivative for Germantown, Maryland. It was created using a base image and supervised classification targeting the categories displayed in the legend. This map shows the acreage of areas as they currently exist and is intended to provide a baseline for change. As areas get developed, the same techniques can be used on newer imagery to map the change and gauge which land uses are expanding or shrinking most, and by how much.
      This was an excellent introduction to one method of supervised classification; there are many other types and reasons to conduct it, but those are for another class. Thank you.

      Thursday, November 5, 2015

      Special Topics and Meth and Analysis

              Welcome to the continuation of our look at statistical analysis with ArcMAP. Recall that the theme being explored with statistics is methamphetamine lab busts around Charleston, West Virginia. These past two weeks of analysis have been the bulk of the work for this project. The overall objectives of the analysis portion are to review and understand regression analysis basics and a couple of key techniques, define the dependent and independent variables for the study as they apply to the regression analysis, perform multiple iterations of an Ordinary Least Squares (OLS) regression model, and finally complete six statistical sanity checks based on the OLS model outcomes.
             In the previous post we looked at a broad overview of the area being analyzed. There are 54 lab busts from the 2004-2008 time frame in the DEA's National Clandestine Laboratory data. Decennial census data from 2000 and 2010 at the census tract level was spatially joined to these 54 lab busts. The data was then normalized into percentages by census tract across 31 categories for analysis in the OLS model. These 31 categories were fed into the model and systematically removed while analyzing their effect on it. Ultimately, as good a model as possible was arrived at, with some results below.

              This is an extract of the table I put together from the ArcMAP-generated output to depict the OLS results. This was a cleaner format than a straight screenshot because it incorporates descriptions of the individually labeled data elements. The key thing to note is that only 11 variables of the original 31 are now incorporated into the OLS model. How were variables removed, you might ask? There are six checks, or questions to answer, to determine the validity of a variable's use in the OLS model: does an independent variable help or hurt the model; is its relationship to the dependent variable as expected; are there redundant explanatory variables; is the model biased; are there missing variables or unexplained residuals; and how well does the model predict the dependent variable? The first three were generally grouped into one solid check for determining whether a variable should stay or go, while the remaining checks were applied to the model results as a whole. The key attributes to look at for a variable fall in line with [a], [b], and [c] as depicted on the table. As long as a variable had a coefficient not near zero, a probability lower than 0.4, and a VIF less than 7.5, it could stay. Not all of the remaining variables meet these criteria, but you have to look at model functionality as a whole. The R-squared value [d] is right at 0.7 (rounded up), which means the model as it stands explains about 70% of the variation in meth lab locations based on the variables in use. This is pretty good when working with sociological data of this type. After looking at this data table, it is time to transition to the visual interpretation seen below.
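
For anyone wanting to reproduce these checks outside ArcMAP, here is a hedged sketch with statsmodels; the table name and the three example explanatory columns are placeholders, not the actual 11 variables kept in the model.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("tract_variables.csv")                         # hypothetical joined tract table
explanatory = ["PCT_RENTER", "PCT_POVERTY", "PCT_UNEMPLOYED"]   # placeholder variable names
y = df["METH_LABS"]
X = sm.add_constant(df[explanatory])

model = sm.OLS(y, X).fit()
print(model.summary())            # coefficients [a], probabilities [b], R-squared [d]

# VIF [c]: values above roughly 7.5 flag redundant explanatory variables
for i, col in enumerate(explanatory, start=1):                  # skip the constant column
    print(col, variance_inflation_factor(X.values, i))
```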



           This map depicts the standard residuals for the OLS model shown in the table, symbolized by standard deviation. Rather than wanting a Gaussian-style spread showing some of every color, you ideally want values in the -0.5 to +0.5 range, as that indicates the model is predicting accurately there. Darker browns indicate areas where the model predicted fewer meth labs than there actually were, and darker blues indicate areas where the model expected more meth labs than were actually present. Remember, though, from the table above that the model only explains about 70% of the variation in meth lab locations, and the majority of the study area is still within one standard deviation.
          This week's focus was not to describe the data results but to accomplish the analysis leading up to them. Please check back next week for a look at the finalized product. Thank you.

      Friday, October 30, 2015

      Remote Sensing: Unsupervised Classification

      Welcome to this week's discussion of unsupervised classification with remotely sensed imagery. This is a multifaceted lab looking at a number of different processes, culminating in an unsupervised classification and manual reclassification of the resulting raster dataset for a permeability analysis. The main objective was to understand and perform an unsupervised classification in both ArcMAP and ERDAS Imagine. Imagery for two different areas was provided; ultimately the UWF area, as seen in the map below, was the final subject matter for exploring these topics.
      Unsupervised classification is a classification method in which the software uses an algorithm to determine which pixels in the raster image are most like other pixels throughout the image and groups them accordingly. After the software has grouped the pixels, it is up to the user to define what the grouped classes represent. For this type of classification the software is given certain user-defined parameters, such as the number of iterations to run, the confidence or convergence threshold percentage to reach, and sample sizes. These essentially tell the software how long to run, what the minimum "correctly grouped" pixel percentage is, and how many pixels to adjust at a time. Let's look at how this was applied this week.


      A high-resolution true color image of the UWF campus was used for the above analysis. This entailed running a clustering algorithm on the true color image to group like pixels together and then exporting them as a slightly less detailed image for storage and processing speed reasons. The clustering algorithm created 50 classes, or shades of pixels, which approximated the true color image; that was essentially the unsupervised classification. The software was told to produce 50 classes with 95% accuracy overall. I then manually reclassified each of those 50 original classes into one of 5 labeled classes by highlighting each pixel shade, reviewing it against the true color image, and assigning it to one of the classes described. Four of the five classes are straightforward and represent what they say, with some possible error. The mixed class, however, exists because certain pixel shades applied to items representing both permeable and impermeable surfaces. For example, some dead grass could show a tan pixel while a tan rooftop could show the same value, so recoding that shade as grass or buildings would be wrong for at least some of that cluster of pixels. To account for this, the mixed class was created, which is why you can see some rooftops as blue, grassy areas as blue or green, and some blue sprinkled throughout.
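
A rough open source equivalent of that cluster-then-recode workflow might look like the k-means sketch below; the input file and the recode table are assumptions, and ERDAS's clustering routine differs in detail from k-means.

```python
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("uwf_true_color.tif") as src:       # hypothetical input image
    img = src.read()                                   # (bands, rows, cols)
n_bands, rows, cols = img.shape
pixels = img.reshape(n_bands, -1).T

# Group like pixels into 50 spectral classes
clusters = KMeans(n_clusters=50, random_state=0).fit_predict(pixels).reshape(rows, cols)

# Manual reclassification: map each of the 50 clusters to one of 5 labeled classes
# (e.g., 1=buildings, 2=pavement, 3=grass, 4=trees, 5=mixed) after visual review
recode = {i: 5 for i in range(50)}                     # default everything to "mixed"
recode.update({0: 3, 1: 4, 2: 1})                      # example assignments only
reclassified = np.vectorize(recode.get)(clusters)
```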
      Overall this is a fairly coarse analysis, but it does a great job of exercising the process and creating plausible results. Next week we will look at supervised classification, in case that question came to mind. Thank you.

      Sunday, October 25, 2015

      Remote Sensing and Thermal Analysis

      Hello and welcome to this week in Remote Sensing, where I'll be discussing multi-spectral image analysis, specifically looking for trends in thermal or infrared energy. This week was designed to focus on composing a series of different raster bands into a composite image in both ERDAS Imagine and ArcMAP, then, with a multi-band composite image in hand, manipulating which bands are displayed in which colors and interpreting the resulting image. A couple of different images were provided by UWF to exercise these skills and to ultimately come up with a user-derived analysis of some particular feature. Let's look at my composite map, and then we will discuss how I met the objectives above.



      This is a thermal overview of Florida's Emerald Coast. The image is from February 2011 and was provided by UWF. I made a composite of the original 7 bands of information available. The main map and two insets are all the same image with different band combinations, or visualizations, of the available data. The central feature in all three images is a large oblong clearing. What is this clearing? It is one of the many military firing ranges located along the panhandle. What am I trying to do with it? Overall I'm trying to differentiate this area from its surroundings using thermal imagery. The purple image comes from a combination of shortwave and thermal infrared bands that lends brightness to the "hottest" areas, those that heat up and/or emit the best. You can see a very similar spectral signature all along the island to the south; Santa Rosa Island is made up of white sand beaches and dunes and appears to be the only feature that might be spectrally similar to the artillery ranges. The color inset is another infrared look at the area, but rather than grayscale, color has been used to give a characteristic spectral pattern to complement the other images. Additionally, for reference, you can see that when viewed in true color the ranges do stand out fairly easily, and it was this resolution and clarity that I wanted to replicate using an all-infrared color image. I think it worked well. What about you? Ultimately this band combination is great at identifying land clearings specifically, but you can also see a good range of vegetation density and land-water contrast, even if that wasn't my original focus. Thank you.
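
For those curious how such a composite could be assembled outside ERDAS or ArcMAP, here is a small Python sketch; the band file names and the 7-4-3-style combination shown are illustrative assumptions, not the exact bands used in the map above.

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Read the seven individual band files (hypothetical names) into one stack
bands = []
for i in range(1, 8):
    with rasterio.open(f"band{i}.tif") as src:
        bands.append(src.read(1).astype("float32"))
stack = np.stack(bands)                              # shape: (7, rows, cols)

def stretch(a):
    """Simple 2-98 percentile contrast stretch for display."""
    lo, hi = np.percentile(a, (2, 98))
    return np.clip((a - lo) / (hi - lo), 0, 1)

# Example combination: band 7 (shortwave IR) as red, band 4 (NIR) as green, band 3 as blue
rgb = np.dstack([stretch(stack[6]), stretch(stack[3]), stretch(stack[2])])
plt.imshow(rgb)
plt.show()
```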

      Wednesday, October 21, 2015

      Special Topics and Statistics and Meth Labs

              Hello and welcome back to my blog. We are beginning a fresh topic for the next few weeks' worth of assignments and posts. This project has us delving deep into the clandestine, dangerous, and ultimately bad world of drugs. Specifically, we will be examining the role of GIS statistical analysis in helping law enforcement determine the most likely locations of methamphetamine labs. Methamphetamines have been around since the early 1920s and illegal since the 1960s, which drove the trade underground. Meth labs have been found in every state, but surprisingly in only about half of the country's counties. This leaves a huge disparity between the national / state level problem and the county level. Over the next few weeks I will be analyzing two particular counties of West Virginia, Putnam and Kanawha. These counties are credited with 187 meth lab busts from 2004-2008, the majority of which came after 2005, which saw the introduction of the Combat Methamphetamine Epidemic Act of 2005 aimed at eliminating over-the-counter acquisition of pills that could be distilled into meth-generating substances. Once again, the overall goal is to explore the uses of GIS in a topic that is incredibly relevant to the nation. Chances are we all know someone who has been impacted by drugs or drug use, or at minimum you can see it all too prevalently in the news. The idea behind this lab is to examine the socioeconomic trend information that can help determine where meth labs are most likely to be and to provide that information to law enforcement. The end deliverable will be a scientific paper discussing the issue and the analysis done on the study area below.


      As stated earlier, the study area in the Charleston area of West Virginia was home to 187 meth lab busts. This information has already been summarized as meth lab density by census tract in the main map above; essentially, the total number of busts per census tract was divided by the area of the tract to give the density values seen in the legend. This map also provides a basic overview of the subject counties and gives state context as well. As an extra tidbit, the displayed cities show that throughout this study area there are only 4 named cities that house over 10,000 people apiece. This may or may not factor into the study. Time will tell.
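
A minimal sketch of that busts-per-tract-area calculation in Python follows, assuming hypothetical layer names and a projected CRS in meters:

```python
import geopandas as gpd

# Hypothetical inputs, reprojected to a metric CRS (UTM 17N covers the Charleston area)
tracts = gpd.read_file("wv_tracts.shp").to_crs(epsg=26917)
busts = gpd.read_file("meth_lab_busts.shp").to_crs(epsg=26917)

# Count busts falling within each tract
joined = gpd.sjoin(busts, tracts, how="inner", predicate="within")
counts = joined.groupby("index_right").size()

tracts["bust_count"] = counts.reindex(tracts.index, fill_value=0)
tracts["density_per_sqkm"] = tracts["bust_count"] / (tracts.geometry.area / 1e6)
```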



      I will leave you with this:


      Sunday, October 18, 2015

      Remote Sensing and Multispectral Analysis

            Welcome to this week's remote sensing topic revolving around multi-spectral analysis through spectral enhancement. Essentially this means taking existing spectral data and presenting it in a manner that might bring out certain relationships or patterns not readily apparent in other presentations. This week's objective was to study an image set and identify certain spectral relationships that aren't readily seen in a standard true color image. This is done by manipulating the pixel values to show other relationships, through grayscale panchromatic views of single spectral bands or through different combinations of multiple bands, such as a standard false color infrared image. Both ERDAS Imagine and ArcGIS were used to explore the given image. Several tools within ERDAS were used, such as the Inquire cursor to look at particular groups of pixels for their brightness information, along with histograms and contrast information to identify patterns within multi-spectral and panchromatic views of one or more spectral bands. The image manipulated below was provided by UWF, and the assignment was to identify three different sets of unique spectral characteristics present within the image and to build maps around them.

      The first criterion involved locating the feature in spectral band 4 that correlates to a histogram spike in value between 12 and 18.


      As stated above, the first criterion involves band 4, which is generally associated with near infrared (NIR) energy and is good for looking at vegetation, soil, cropland, and land-water contrast. For this task I needed to look at the histogram and find the spike seen in the lower right inset. From there I specifically made this the only "visible" feature in the map to its left. You can see that in both the standalone feature map and the grayscale band 4 view, the water stands out quite significantly.
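
As a side note, the same band 4 isolation could be sketched in Python as below; the file name is an assumption, and the 12-18 window comes straight from the histogram spike described above.

```python
import rasterio
import matplotlib.pyplot as plt

with rasterio.open("scene_band4.tif") as src:   # hypothetical single-band file
    b4 = src.read(1)

plt.hist(b4.ravel(), bins=256)                  # locate the spike between values 12 and 18
mask = (b4 >= 12) & (b4 <= 18)                  # keep only the pixels in that spike
plt.figure()
plt.imshow(mask, cmap="gray")                   # the isolated feature (water, in this case)
plt.show()
```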

      The second criterion involved locating the feature that shows both a spike in the visible and NIR bands around a value of 200, and a large spike in the infrared bands 5 and 6 around pixel values 9 to 11.






      The main features of this image are displayed in false natural color, employing bands 5, 4, and 3 for red, green, and blue respectively. This color combination does particularly well at letting the areas being investigated stand out. You can see that the insets display different extents of the same data in different spectral scenes. The two separate breakdowns of pixels, values around 200 in the visible bands and values 9-11 in the infrared bands, are compared in the lower right. In the visual inset in the lower left, this appears to be snow within some mountainous valleys.

      The last criterion revolves around water features that appear brighter than usual in bands 1-3 but remain relatively constant in bands 5 and 6.







      A true (natural) color image is shown opposite a custom band arrangement displaying band 7 (shortwave IR) along with the green and blue bands shown in their own respective colors for enhancement. If you look at the natural color image, you can see a river in the upper right portion that is much darker than the waterway featured in the other images. This inlet is then highlighted spectrally in the other images. By focusing on green and blue and removing red, so that pinks and reds are limited to open ground, you get a much better highlight of the likely shallower inlet area. You can also see that false color IR highlights the bay only by contrasting it against the vegetation, which stands out so well.

      Wrapping it up, this assignment was about taking one image, which is a combination of bands of spectral information, and organizing that information in different ways to make certain features more readily apparent than others. Thank you.

      Monday, October 12, 2015

      Remote Sensing: Preprocessing and Enhancement

      Welcome to this week's remote sensing topic of image pre-processing and spatial enhancement. This week focuses on acquiring remotely sensed imagery and using programs like ERDAS Imagine or ArcGIS to modify the image so its information is more readily apparent than it might be at first glance; this is known as spatial enhancement. To facilitate this week's objectives of using various filters and image transformations, I was supplied with a 2003 Landsat 7 image that had serious striping due to a Scan Line Corrector error. Exploration using low pass, high pass, and Fourier transformations led up to the actual work on the map below. Low pass enhancements essentially apply a mean kernel filter that smooths detailed areas, letting larger similarities show through. The opposite is a high pass filter, which enhances small details like road patterns. A Fourier transformation was the basis of the processing of the image below and was used to reduce the striping present in the lower inset.
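
For a sense of what those filters do, here is a hedged SciPy sketch rather than the ERDAS implementation; the input file name and kernel size are assumptions.

```python
import rasterio
from scipy.ndimage import uniform_filter

with rasterio.open("landsat7_slc_off.tif") as src:     # hypothetical striped input band
    band = src.read(1).astype("float32")

low_pass = uniform_filter(band, size=5)    # 5x5 mean kernel: smooths detail and striping
high_pass = band - low_pass                # what remains is fine detail such as road patterns
```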


      An inset of the initial image is shown in the lower left, with the final result of the ERDAS Imagine and ArcMAP work displayed in the large frame. Striping is still present, but the result is much cleaner than the original. The information on the map details that wedge and low pass transformations were key to the initial processing, followed by stretched symbology adjustments focusing on histogram display by standard deviation.
      All other courses in this program so far have provided imagery that was already processed or corrected. This was an excellent intro to what to do with imagery that is not yet at usable quality. Stay tuned as this concept is broadened over the coming weeks.

      Saturday, October 10, 2015

      Special Topics and Mountaintop Removal Report

      The past two posts for Special Topics have focused on the preparation and analysis of Mountaintop Removal (MTR) areas located along the Appalachian Mountain chain in the eastern United States. The analysis culminated in the report here for the Group 3 area. The overall purpose of the report is to compare 2005 MTR measurements provided by Skytruth.org to an internal UWF study of available 2010 imagery.
      The last post outlined the reclassification of the Landsat rasters for a particular area, and here we are looking at the fruits of that labor. The Group 3 analysis team, which I led and coordinated, combined data for two Landsat scenes and then eliminated areas of noise and interference. Noise and interference refer to areas that show spectral signatures similar to MTR sites; river erosion, urbanization, agricultural clearing, and roadways can all show similar reflectance. We attempted to remove these areas by discounting anything within 400 m of rivers, 400 m of highways, and 50 m of small streams and roads. One key source of interference remained afterward: clouds in the imagery. When reclassified, some of these areas could not be cleared away and left some false positive MTR sites. These have been highlighted on the map below and likely account for the inaccuracy described on the map.
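
A rough sketch of that exclusion step in open source Python is below; the layer names are assumptions, and a metric CRS is assumed so the 400 m and 50 m buffers are meaningful.

```python
import geopandas as gpd
import pandas as pd

mtr = gpd.read_file("candidate_mtr_2010.shp")          # hypothetical candidate MTR polygons
rivers = gpd.read_file("rivers.shp")
highways = gpd.read_file("highways.shp")
minor = gpd.read_file("streams_and_roads.shp")

# Buffer each interference source by the distances used in the analysis
buffers = pd.concat([rivers.buffer(400), highways.buffer(400), minor.buffer(50)])
exclusion = gpd.GeoDataFrame(geometry=gpd.GeoSeries(buffers), crs=mtr.crs)

# Erase the buffered areas from the candidate MTR sites
cleaned = gpd.overlay(mtr, exclusion, how="difference")
```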


      The map above is based on the two combined Landsat raster images, which were used to derive the MTR sites through unsupervised reclassification during the analysis phase. These are overlaid on the 2005 identified sites. With the exception of the areas in the northwest explained on the image, the data supports an overall decline in MTR activity. This is likely a result of being on the western edge of the study area as a whole, with MTR progressively moving eastward into richer regions of the US coal fields; those fields aren't depicted here. This blog entry and its associated map are only one part of the project as a whole. Other analysis and conclusions drawn as part of this project can be found at the following link: http://arcg.is/1Yxokr7 . This is an ArcGIS Online Story Map Journal dedicated to the project; overlaid there with this data on the analysis tab is a depiction of the coal fields. I invite you to look at the more in-depth discussion of this analysis and MTR activity through the journal. Thank you.

      Monday, September 28, 2015

      Special Topics and Mountain Top Removal Analysis

     Mountaintop Removal (MTR) analysis is this week's theme, continuing the Special Topics preparation done last week. The main objective this week was to create a reclassified raster using a combination of ERDAS Imagine and ArcMAP, ultimately depicting user-defined areas of suspected MTR. Recall that the overall study area was divided among groups; I am part of group three, and you can see an overview of my working area in last week's post. The area my group was assigned is broken down into two Landsat rasters. I took one raster as my portion of the assigned work while other members worked with the same or the other raster. The key element of this week's work was visually determining evidence of mountaintop removal through unsupervised image classification and reassigning those elements a new value.


      This is a screenshot of ongoing work in ArcMAP showing the areas reclassified as MTR. The red depicts areas that have been preliminarily classified as MTR. Note there are two bulk areas classified as such: those in the lower right appear darkest and are more legitimate examples of MTR, while the area to the northwest, on visual inspection (imagery not shown here), more closely resembles urban build-up. The process was to assign all pixels in the original raster to 1 of 50 different color shades, then take the shades that stand out as MTR and change them to red. This change affects all other pixels of that color shade in the image. So imagine the clearing of a field for planting, or the build-up of an urban area; it is easy to see how many of those properties would be similar to land being cleared for mining. This is just one example of how other areas can end up reclassified the same way as actual MTR sites. The analysis and composition of results will continue next week as we take some of those confounding factors out of the results above. If I remove areas in proximity to developed roadways in the town in the upper portion of the image, likely very few areas of MTR will remain there. That and further work will be done and presented next week. Please stay tuned!

      Saturday, September 26, 2015

      Remote Sensing: An Intro to ERDAS

     ERDAS Imagine is an excellent software suite designed to stand alone or interoperate with other spatial analysis software such as ESRI's ArcGIS. This week's look into remote sensing dealt with two things: a good look at the spectrum of electromagnetic radiation (EMR), and an introduction to ERDAS Imagine through some image files. The overall goals were to understand some key concepts about EMR to put more substance behind the images we have looked at over the past few weeks, and to look further into how we can manipulate the information in these images.

                      Let's first get a glimpse of the electromagnetic spectrum to start things off.


            This is the spectrum with long waves on the right and short waves on the left, showing the transition from one side to the other, with our nice visible portion there in the center. What you need to understand here is that everything in our world is an emitter, reflector, or absorber of energy falling somewhere on this spectrum. There are also things like the sun that emit energy continuously across the entirety of the spectrum. All of the EMR sent out by the sun then interacts with our atmosphere or with whatever it comes in contact with, which it either reflects off of, is absorbed by, or is refracted from. Remote sensing instruments then capture some of this EMR for us to view, and from there we get to manipulate the imagery into usable products like the one below.


           This is a very simple map built as an example of some basic processing in ERDAS Imagine. The image itself was first used in ERDAS to calculate the area of each category represented here; different classes of vegetation and land cover are shown. You might also notice that when the remote sensing platform originally took the image, there was something else we couldn't classify as ground cover: clouds. Clouds occupy 507 acres of the image. From ERDAS Imagine the image was then exported for refining in ArcMAP. This was a good, simple intro to a program that will likely be used for more and more complex tasks throughout the remainder of the class.
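
As an illustration of the acreage step, a small Python sketch is below; the classified raster name is an assumption, and the pixel size is assumed to be in meters.

```python
import numpy as np
import rasterio

SQM_PER_ACRE = 4046.86

with rasterio.open("classified_cover.tif") as src:             # hypothetical classified raster
    classes = src.read(1)
    pixel_area_sqm = abs(src.transform.a * src.transform.e)    # pixel width x height (assumed meters)

values, counts = np.unique(classes, return_counts=True)
for v, c in zip(values, counts):
    print(f"class {v}: {c * pixel_area_sqm / SQM_PER_ACRE:.1f} acres")
```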

      Monday, September 21, 2015

      Special Applications and Mountaintop Removal Preparation

       Have you ever lost a mountain? How could you lose a mountain if you wanted to? Well, the next few weeks will be spent looking at special applications of GIS that can be used to analyze mountaintop removal. Mountaintop removal and valley filling are common practices most particularly associated with coal mining. The Appalachian Mountain chain in the eastern United States is particularly afflicted with this form of mining, which essentially involves peeling away the surface of the earth, including trees, brush, and soil, to get at the rocky layer beneath and harvest away the precious, precious coal. This is the premise of the project encompassing the next few weeks' posts. This week's objective is to skim the surface of what mountaintop removal is, means, and does, and then create a basemap for the study area I will be exploring over the next few weeks. This project has both an individual and a group component. Deliverables like the map below still have to be done independently, but much of the upcoming analysis will be broken down into manageable chunks to be done in groups, with a final group presentation project a couple of posts from now.


            The basemap above gives an overview of the study area being broken down by group and a look at the chunk that belongs to my group. Many things were done to what was originally a DEM layer to show the elevation, streams, and basins depicted in the group's sliver of the study area. Essentially, a mosaic raster was made out of 4 DEM sections, which were then pared down to their extent within the study area. From there multiple tools were applied to the mosaic to generate the streams and basins, as sketched below. Holes in the pixel data first had to be filled with a fill tool, so that when running a subsequent flow analysis there aren't sinks for the "flowing water" to disappear into. Flow direction is applied to see how and where water would or should move given the overall contours of the elevation slopes. From there a flow accumulation calculation is run to determine which cells actually correlate to a running stream. This calculation feeds into a conditional statement tool identifying the areas that should be streams, and finally a feature class is created from that entire process and displayed appropriately.
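
A hedged sketch of that tool chain as it might be scripted with ArcGIS Spatial Analyst (arcpy) follows; the workspace, DEM name, and the accumulation threshold of 200 cells are assumptions.

```python
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Basin, Con, StreamToFeature

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\mtr_prep"               # hypothetical workspace

filled = Fill("mosaic_dem")                        # fill sinks so flow has nowhere to "leak"
flow_dir = FlowDirection(filled)                   # which way water moves given the slopes
flow_acc = FlowAccumulation(flow_dir)              # how many cells drain through each cell
basins = Basin(flow_dir)                           # drainage basins from the flow directions

streams = Con(flow_acc > 200, 1)                   # conditional: keep cells acting like streams
StreamToFeature(streams, flow_dir, "streams.shp")  # export the stream network as a feature class
```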
            What does this have to do with coal? Well, one of the big problems associated with this subject is determining how much land we lose once an area is subjected to MTR. If this is a good "before" picture, we can then take an "after" or "during" picture and calculate the difference. Keep checking back to see where we go with the data at hand.
            In the meantime you might find these resources interesting. The first is a Story Map displaying the 6 stages of mountaintop removal for your perusal. The next, a Journal Map, is the in-progress building blocks of the final compilation for the project, to be finalized in the next couple of weeks. Thank you.

      Saturday, September 19, 2015

      Remote Sensing: Classification Accuracy

      Welcome to a continuation of last week's look at Land Use / Land Cover.
      This week I am specifically looking at the classification accuracy of last week's Land Use / Land Cover example map of Pascagoula, Mississippi. The overall objective this week was to explore the different types of accuracy and how to compute them from the data presented. This builds on last week's map and is based on taking samples at locations of each classification type and determining whether the sample is accurately portrayed by the assigned classification. These samples are typically checked in situ, meaning at the physical location being examined, or ex situ, using some other external means such as higher resolution imagery. All of the points on the map below were checked in an ex situ manner using Google Maps satellite view. Three different types of accuracy can be calculated from the sample points: overall accuracy, which is the percentage of all sample points correctly classified; user's accuracy, which is the probability that a point classified on the map actually belongs to that class on the ground (its complement being the error of commission, when a point is committed to an incorrect class); and producer's accuracy, which is the probability that a reference sample of a given class was correctly classified on the map (its complement being the error of omission). Let's look at the example map.


            There are 35 points on the map with a total accuracy of 66%; 12 of the points were misclassified. Misclassified points are in red, correctly classified points are in green. A stratified random approach was taken to selecting sample point locations: each category has a minimum of one point, with categories that cover much larger areas receiving more sample points, but none more than 5 points total, for proportionality. You can start to see themes in which points are inappropriately classified. For example, the light purple class 61 was determined to be entirely misclassified; upon closer imagery inspection this region should all be classified as 62, essentially going from forested wetland to emergent wetland.
             Switching to user's accuracy, categories 11 and 12 both had a 4/5 (80%) user's accuracy rating. However, class 12 also had 3 points that should have been classified to it but were not. That means there was a possible total of 8 sites: 4 classified appropriately, 1 not, and 3 discovered during accuracy testing, for a 50% classification accuracy. This lab was a good look at the kinds of errors that can be made when classifying areas for land use and land cover, and how to start avoiding them. Thank you.
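
For reference, the three accuracy measures described above can be computed from a confusion matrix; the tiny sample arrays below are made-up placeholders, not the 35 actual points.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder sample data: the class on the LULC map vs. the class seen in the imagery check
mapped    = np.array(["11", "11", "12", "61", "62", "62"])
reference = np.array(["11", "12", "12", "62", "62", "61"])

labels = sorted(set(mapped) | set(reference))
cm = confusion_matrix(reference, mapped, labels=labels)   # rows = reference, cols = mapped

overall = np.trace(cm) / cm.sum()                # overall accuracy
users = np.diag(cm) / cm.sum(axis=0)             # user's accuracy (1 - error of commission)
producers = np.diag(cm) / cm.sum(axis=1)         # producer's accuracy (1 - error of omission)
print(overall, dict(zip(labels, users)), dict(zip(labels, producers)))
```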