  1. #1
    SitePoint Zealot biggie2
    Join Date: Feb 2006
    Posts: 184

    Software to map out a site architecture

    Is there software or a web app that can map out the site architecture of a website?

    I'm doing a redesign of a site that has a lot of pages. I was using a spreadsheet to list and map out the major sections and the pages within each section, along with their respective URLs, but that is time-consuming.

  2. #2
    SitePoint Wizard
    Join Date: Dec 2003
    Location: USA
    Posts: 2,582
    I don't know of any software made specifically for mapping out the architecture of a website, though you could probably adapt tools for other modelling systems, like UML, to your purposes.

    I always just like to use the low-tech version... lots of paper.

    Another idea is along the same lines as your spreadsheet: group the various pages into logical categories. Then, instead of having to map out every single page, you can just map out how each category fits into the site.

    Depending on how the site is structured, you could also use a spidering utility to crawl it and pull out a list of page titles and URLs.

  3. #3
    SitePoint Zealot biggie2
    Join Date: Feb 2006
    Posts: 184
    Hmm, a spidering utility sounds good. Got a recommendation? But wouldn't the site's navigation have to be text-based, or the site need a sitemap, in order to be indexed properly?

  4. #4
    SitePoint Wizard
    Join Date: Dec 2003
    Location: USA
    Posts: 2,582
    There are hundreds of free spidering tools. Here is one that generates a list of links: http://www.webconfs.com/search-engin...-simulator.php

    To be spidered, a site doesn't necessarily have to have text-based navigation or a sitemap (though both are good practice). Spiders work by gathering all of the local links on a page, then visiting each of those pages and gathering their links, and repeating until there are no unchecked pages left.

    So if there are pages that aren't linked from anywhere (or are linked only from pages that require a login), the spider won't find them. It will pick up everything else.

    You can also write your own spidering software pretty easily. Have it read in the text of a page you specify and find all of the anchor elements. Add the hrefs from those to an array, and add whatever information you want about the current page to another array. Then go to the next page and do the same thing, making sure you never add the same link twice.
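
    Here's a rough sketch of that idea in Python, assuming Python 3 and only the standard library (the start URL is just a placeholder). It follows local links breadth-first and prints each page's title and URL, which you could paste straight into a spreadsheet:

    Code:
    # A minimal same-site spider: start from one URL, follow local links
    # breadth-first, and record each page's URL and <title>.
    import urllib.request
    from urllib.parse import urljoin, urlparse
    from html.parser import HTMLParser

    class LinkTitleParser(HTMLParser):
        """Collects href values from <a> tags and the page's <title> text."""
        def __init__(self):
            super().__init__()
            self.links = []
            self.title = ""
            self._in_title = False

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)
            elif tag == "title":
                self._in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self._in_title = False

        def handle_data(self, data):
            if self._in_title:
                self.title += data

    def crawl(start_url):
        site = urlparse(start_url).netloc
        to_visit = [start_url]   # pages still waiting to be fetched
        seen = {start_url}       # every URL ever queued, so nothing is added twice
        pages = []               # (url, title) pairs for the site map

        while to_visit:
            url = to_visit.pop(0)
            try:
                with urllib.request.urlopen(url) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue         # skip pages that fail to load

            parser = LinkTitleParser()
            parser.feed(html)
            pages.append((url, parser.title.strip()))

            for href in parser.links:
                absolute = urljoin(url, href).split("#")[0]
                # only follow links that stay on the same site
                if urlparse(absolute).netloc == site and absolute not in seen:
                    seen.add(absolute)
                    to_visit.append(absolute)
        return pages

    if __name__ == "__main__":
        for url, title in crawl("http://www.example.com/"):
            print(title + "\t" + url)

    A real crawler would also want to respect robots.txt and throttle its requests, but for mapping out your own site this is usually enough.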

    Hope that helps.

