I have chosen Visual Studio as the environment in which we will create the new Avkat Web App. This app will build on the work Daniel has done in ArcGIS Online. I have embedded the map within my website, which was a fairly straightforward process. I have also created an interface that allows for querying our tabular data in the Avkat_mySql database. Users will be able to choose among the content types of features, survey units, and ceramics, and can specify a date range or era corresponding to each content type. When users click the "Execute Query" button, a table is generated that includes all the needed attributes of their chosen content. The first two cells of each row contain an info and a photo button. The info button brings the user to a new tab with a detailed listing of all attributes; the photo button brings the user to a new tab displaying all photos of the chosen content.
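As a rough sketch of how each result row might be assembled server-side (the real backend is PHP; the function name, column names, and URLs below are my illustrations, not the actual Avkat code):

```python
def render_row(record):
    """Render one queried record as an HTML table row whose first two
    cells hold the info and photo buttons. All names here are assumed."""
    rid = record["id"]
    cells = [
        '<td><a href="info.php?id=%s" target="_blank">Info</a></td>' % rid,
        '<td><a href="photos.php?id=%s" target="_blank">Photos</a></td>' % rid,
    ]
    # The remaining cells hold the record's other attributes.
    for key in sorted(k for k in record if k != "id"):
        cells.append("<td>%s</td>" % record[key])
    return "<tr>%s</tr>" % "".join(cells)

row = render_row({"id": "F0002", "type": "feature", "era": "Byzantine"})
```

The `target="_blank"` attribute is what produces the new-tab behavior described above.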
We are currently building the functionality behind the photo and info buttons, in collaboration with Daniel. The design of the current website is also under construction; we are focusing on the functionality of the web app first and foremost. Here is a quick look at what we have so far, with some queried results.
Over the last week I have spent numerous hours troubleshooting ArcGIS map services. I had created multiple map services to host in an ArcGIS Online map; each of these services acts as a single layer in my map. The layers are observation points, features, survey units, modern roads, modern towns, QuickBird imagery, and a digital elevation model.
Two of the layers failed to load in a web format. This was traced back to an ArcSDE problem. The features and survey units layers both required multiple related tables, as well as a new related table, the photolog. This new table allows us to choose a given feature, such as F0002, and get back a list of the photo IDs taken at that specific feature.
I spent numerous hours on the phone with ESRI technical support, and we discovered that the problem stemmed from our database having a capital A in its name, Avkat.SDE. I therefore had to create a new database to host the problem data. This was no easy task, as errors were appearing that, according to the ESRI analyst, "only happen in ArcGIS 9.1." Needless to say, I was running ArcGIS 9.3. After much effort, I finally got the features and survey units layers working, and they are up and running. However, during a second technical support session, I learned that ArcGIS Online does not support related table views in its popups. This is a problem: we wanted to be able to click on an individual feature and see in the popup all the information from the feature table, but also pictures. We now need a new way to show the pictures for individual features; this will have to be handled by the query method.
We are now proceeding to add functionality to the query portion of the website: a method to view a basic layout of the queried information, options for extra information, and a button to view the photos.
Recently I have set up a map in ArcGIS Online for eventual embedding in the Avkat website.
- The layers of this map are taken from the MapServer definition file currently running at earth.cofc.edu:10080/avkat.html.
- I took the MapServer file hosted on earth and split it into seven separate MapServer files, each running independently on earth.
- Then I imported them into a new map in ArcGIS Online, each with its own observation pop-ups.
With the help of Norm, I imported the photolog into our Avkat geodatabase as an SDE geodatabase table, so that we could begin bringing the photos into the map as part of each feature's popup alongside its other information.
- I then imported this table into the Survey Units MapServer file and the Features MapServer file.
- The photolog has a field, FUSUID, that records either the FUID or the SUID where each of the more than 15,000 pictures was taken.
- I then related the features layer to the photolog on FUID and FUSUID,
- and the survey units layer to the photolog on SUID and FUSUID.
- These relates will allow us to know exactly which photos go with each FUID and SUID.
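The effect of these relates can be illustrated with a toy in-memory example. The table and field names (photolog, FUSUID, FUID) follow the notes above, but the sample data and the use of SQLite are purely illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE features (FUID TEXT PRIMARY KEY)")
con.execute("CREATE TABLE photolog (photo_id TEXT, FUSUID TEXT)")
con.execute("INSERT INTO features VALUES ('F0002')")
con.executemany("INSERT INTO photolog VALUES (?, ?)",
                [("P001", "F0002"), ("P002", "F0002"), ("P003", "SU100")])

# Joining the features layer to the photolog on FUID = FUSUID
# returns exactly the photos taken at a given feature.
photos = [r[0] for r in con.execute(
    "SELECT p.photo_id FROM features f "
    "JOIN photolog p ON p.FUSUID = f.FUID "
    "WHERE f.FUID = ? ORDER BY p.photo_id", ("F0002",))]
# photos is now ['P001', 'P002']
```

The same join run against the survey units layer on SUID would pick up the remaining photos.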
The next step is to set up the photos on our server so that they can be accessed by our map.
- Originally we thought that we could create a Raster Catalog that would point to the directory where the photos are and that we would not need to actually move the photos.
- After we began the geoprocessing, we realized that creating a raster catalog and loading the photos would actually copy all 40 gigabytes of photos. This was no good.
- We then realized that all we had to do was to create a simple website with a list of hyperlinks to each photo using our Inetpub/wwwroot directory.
- This gave me a lot more trouble than expected. I discovered that the standard way to create an HTML page keeps all links within the same directory as the HTML page in the wwwroot folder. My situation was different: my pictures were on a network drive on an entirely separate machine.
- I attempted to create a virtual directory using IIS 6.0 so that on my website, https://earth.cofc.edu/avkat_photos/default.htm, a hyperlink could point to https://earth.cofc.edu/avkat_photos/images/imagename to view a specific photo called imagename. This "subdirectory," images, would not be physically located within the avkat_photos directory.
- I am still researching how to correctly configure this virtual directory so that my hyperlinks resolve correctly.
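In the meantime, the hyperlink page itself is straightforward to generate. Below is a minimal Python sketch that builds such a page from a directory listing; the function name, URL prefix, and file extensions are assumptions, and the IIS virtual-directory mapping still has to exist for the links to resolve:

```python
import os, tempfile

def photo_index_html(photo_dir, url_prefix="/avkat_photos/images/"):
    """Build an HTML page with one hyperlink per photo file.
    photo_dir is the physical folder (e.g. the network drive);
    url_prefix is the virtual directory IIS maps onto it."""
    links = []
    for name in sorted(os.listdir(photo_dir)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            links.append('<li><a href="%s%s">%s</a></li>' % (url_prefix, name, name))
    return "<html><body><ul>\n%s\n</ul></body></html>" % "\n".join(links)

# Quick demonstration against a throwaway directory:
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "DSC0001.jpg"), "w").close()
open(os.path.join(demo_dir, "notes.txt"), "w").close()
page = photo_index_html(demo_dir)
```

Because the links carry the virtual-directory prefix rather than a physical path, the page can live in wwwroot while the photos stay on the network drive.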
I am copying the spatial SQL database, named Avkat, and the tabular SQL database, named avkat_mysql, from the geodata server where they are live, to the beta server, gis.cofc.edu. Once they are both copied to gis, we will attempt to run our live beta version of the web viewer through these two beta databases. I will begin to merge the spatial and tabular databases into a single database for our beta production, in order to simplify the querying done by our system.
I discovered how to copy the tabular database from geodata to gisdata; that was easily done within SQL Server Management Studio. The spatial data requires extra steps. I attempted a plain copy using SQL Server, but this will not work: the spatial database is a geodatabase that has certain schema properties set by ArcGIS Server. I have found that all I have to do is use the Enable Enterprise Geodatabase tool in ArcCatalog to change the schema of a basic SQL Server database into an ArcSDE geodatabase. I am not able to do this at the moment because our ArcGIS Server 10.1 installation is not running. The tool requires something called an authorization file:
“Provide the path and file name of the keycodes file that was created when you authorized ArcGIS for Server Enterprise. This file is in the \\Program Files\ESRI\License<release#>\sysgen folder on Windows and /arcgis/server/framework/runtime/.wine/drive_c/Program Files/ESRI/License<release#>/sysgen directory on Linux. If you have not already done so, authorize ArcGIS for Server to create this file.”
This authorization file is not present on our machine because we have been having trouble updating ArcGIS Server from version 10.0 to 10.1. Once we can finish the install, we will be able to create a new SDE geodatabase.
As an aside, Dr. Newhard asked me to post the following so we would not forget. We discovered a change in how connecting to SDE databases works in ArcGIS 10.1: when connecting to Avkat as an SDE database in ArcGIS, after creating the connection shortcut, you must right-click the database connection, open the geodatabase connection properties, and change the transactional version from DEFAULT to Published. This will allow it to show all of the changes we have made.
Today we successfully moved the user interface from Earth to GIS. The new URL is http://gis.cofc.edu:10080/avkat.html. We will now move our two databases (avkat, avkat_mysql) over to server GIS to complete a full migration of Avkat Informatics, without any ties to server Earth or Geodata. We are currently digging through the code to find all of the data retrieval endpoints, which we will now direct to GIS. I have identified the main endpoint as the file OpenDB.php. The endpoint has now been updated to gis.COUGARS.INT without any visible problems. We must continue to make sure all of the data pointers are consistent. The next step will be to connect the UI to our tabular data in avkat_mysql, which will add functionality to the left panel of our current UI. Multiple breakthroughs were achieved today; progress should move much faster from this point.
We are now in the process of transitioning our current UI from hosting on Earth to the server GIS. We have migrated our Avkat project folder to GIS and are in the process of migrating our two SQL Server databases (Avkat and Avkat_MySql). We recently hit a snag while looking through our project folder: it is currently unknown which version of the UI code is actually running live. Once we identify the code of our current working version, we will upload it to our proteus directory on GitHub, which will help us manage subsequent versions as we begin to manipulate the code. To get Proteus running live on GIS we must change the pointers to our databases; in our current live version the databases live on Geodata. Once we migrate our databases to GIS, we will be able to establish GIS as the full hosting server for the Avkat project, which will help tremendously from an organizational standpoint. We are still waiting on the help of John Wall to direct us on the logistics of pushing Proteus live onto GIS.
Over the last five weeks we have encountered numerous setbacks as well as many successes in moving forward.
- We have set up a workspace in the Visualization Lab at the College of Charleston Santee Cooper GIS Lab.
- This workspace currently has two machines running ArcGIS 10.0 and numerous other programs.
- We have three more machines ready to be imaged after the first two, to provide extra workstations.
- We have set up a development server called gis.cofc.edu with the programs necessary.
To set up our beta server we had to:
- Identify where all of the databases are stored.
- Copy the databases to the beta server for testing purposes; this is still in progress.
- Find all of the relevant code for the website and upload it to our GitHub repository for versioning.
- To date, we have copied the majority of the data from geodata and earth to our new beta server for testing.
Problems we have encountered include:
- We attempted to implement ArcGIS Server 10.1 on our beta server with the assistance of a local Charleston firm, ROK Tech.
- After discussing our options, we decided to revert to the stable ArcGIS Server 10.0; the newer version still has many problems to be solved and is not suitable at this time for live production hosting.
- We communicated with other members of the Proteus team who did the initial setup of the current alpha version of the website UI. Due to their busy schedules and their not being on location with us, it took time to sort out where all of our data was located.
We will continue to work diligently on the beta server migration. Our next step is to set up a live closed beta of the current website UI, hosted on gis.cofc.edu. This will allow us to move on to upgrading the UI, starting from the base version that is the current standard.
This course is a directed study course with Dr. Jim Newhard, Dr. Norm Levine, and Dr. Paul Anderson at the College of Charleston that brings together the Classics, Geology, and Computer Science departments. Using advanced GIS techniques in combination with SQL Server, we will attempt to improve the current viewing solution for Avkat Informatics. I will be working alongside another student, Matt Mazzarell, to display the GIS data in the Web View on our server, earth.cofc.edu.
On Tuesday, September 11, 2012, Norm gave Matt and me a quick lesson on mapping the network drives that will be used to ensure consistency across the project. This is the same mapping used by all involved in the GIS work at the College of Charleston. He also walked through some basic server setup and maintenance to give us remote login access to the server, earth. All involved in the Avkat project have been granted access to the server at this point and can proceed to work.
This course is an independent study in which we seek to improve the current AVKAT solution for viewing archaeological information in the ArcGIS 10 environment. The web design and user interface for AVKAT have been built. Our task is to build a database in Microsoft SQL Server 2008 to house our archaeological data and connect it with the ArcGIS environment. This will improve the functionality of the web interface by giving us the ability to display and query relevant features of our collected data.
We begin with an array of administrative tasks to ensure that we have access to all relevant data and programs across several key network drives. We will be using the server named 'Earth' for the bulk of our work on the AVKAT project. Today we obtained all the credentials we need to work on 'Earth', and we have begun to familiarize ourselves with the AVKAT data and the central program that drives our current user interface, found at http://earth.cofc.edu:10080/avkat.html.
We have now begun developing the query interface for the Avkat Informatics project. The interface allows querying on four parameters. Artifact Type, Time Period, and Unit Type correspond to specific attribute values; a second option for time allows a user to enter a range of values and retrieve all information that potentially falls within that date range. Lastly, the Search Comments of Forms option allows users to enter words to search the comments. The goal is to replicate the query interface of search engines familiar to the end user.
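Under the hood, options like these typically reduce to a parameterized WHERE clause. The sketch below is illustrative only; the column names are my assumptions, not the actual Avkat_mySql schema:

```python
def build_where(artifact_type=None, time_period=None, unit_type=None,
                date_range=None, comment_words=None):
    """Translate the query-form selections into a SQL WHERE clause
    plus a parameter list (hypothetical column names throughout)."""
    clauses, params = [], []
    if artifact_type:
        clauses.append("artifact_type = ?")
        params.append(artifact_type)
    if time_period:
        clauses.append("time_period = ?")
        params.append(time_period)
    if unit_type:
        clauses.append("unit_type = ?")
        params.append(unit_type)
    if date_range:
        # Keep any record whose date span overlaps the requested range.
        start, end = date_range
        clauses.append("date_start <= ? AND date_end >= ?")
        params.extend([end, start])
    for word in comment_words or []:
        clauses.append("comments LIKE ?")
        params.append("%" + word + "%")
    return " AND ".join(clauses) or "1=1", params

where, params = build_where(artifact_type="ceramic", date_range=(300, 600))
```

The overlap test on date_start/date_end is what lets a record with an uncertain date span match any range it could potentially fall within.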
The design of the Query Interface is highly important. Here, users access data in a fashion influenced by the organizational structure of the database, the query itself, and the user's internal notion of what would 'make sense'. The goal of any data system is to provide structure and organization while allowing new systems of organization to emerge as the data is explored. The query interface is the point where the questions of the researcher are addressed by the data system, and the types of questions the interface allows (or doesn't allow) can reveal potential research biases latent in the database structure, such as:
- favoring one type of evidence over another (such as pottery shape over pottery function);
- the way in which time is divided into periods, reflecting biases towards 'important' events of history that may or may not correspond to actual cultural shifts;
- potential artifact misclassification, owing to a lack of 'unknown' options;
- the inability to attach two or more categories to an object due to uncertainty.
Our initial foray into the query interface focuses clearly on the questions of 'what' and 'when,' and is designed as a very bare-bones system to find and extract the data, with the understanding that further analysis will occur in third-party applications. Our system is not intended to be the venue where complex analytics are performed. Our purpose is to provide an easy means of serving datasets to end users, who will manipulate the queried data within applications specifically constructed for their needs.