
Introduction

Definitions of Common ASP Terms
Here are some working definitions and categorizations for analyzing trends and developments in the ISP-to-ASP industry. These are merely suggestions, as readers often have their own definitions. Pure-play ASP (which is defined later in the chapter) examples are hard to find, so this book will use the following definitions to give perspective in depicting critical developments within the service provider industry.
What Is an Internet Service Provider?
An Internet service provider (ISP) is an organization that provides access to the Internet. ISPs can provide service via modem, dedicated, or on-demand access. Customers are generally billed a fixed rate per month, but other charges may apply. Some ISPs allow Web sites to be created and maintained on the ISP's server. This, along with e-mail, allows smaller organizations to have a Web presence with their own domain name. Some larger ISPs also provide news servers, chat environments, and miscellaneous other services (such as Domain Name Service (DNS), among others) in addition to Internet access.
What Is an Application Service Provider?
The ASP Industry Consortium, an alliance of companies formed to promote
and educate the IT industry, offers the following definition: “An ASP manages
and delivers application capabilities to multiple entities from a data center across a wide
area network.”
There are variations of this definition, and sometimes the definition and meanings are confusing. To simplify this definition, an ASP is a third-party service firm that deploys, manages, and remotely hosts a software application on centrally located servers under a "rental" or lease arrangement.
An ASP is a mediator that facilitates remote, centrally managed "rent-an-application" services between a client and an independent software vendor (ISV). The client does not own the application or the responsibilities associated with initial installation and ongoing maintenance. The client, through a personal computer (PC) thin client or an Internet browser, accesses the centralized servers that host the application. The client then manages the results from these external applications locally.
The Pure ASP
A pure ASP is an ASP that partners with a particular ISV and performs the initial application implementation and integration. In doing this, the ASP manages the data center and provides continuous connectivity and support. The ASP manages client relationships by acting as a complete end-to-end solution provider.
It is possible for an ISV to bypass an ASP and work directly with the client, and it is feasible for another company to exist between the ASP and the end user. As an example, Concentric Networks and Exodus Communications manage the data center infrastructure for Corio, which is considered a "pure-play" ASP.
What Is Information Technology Outsourcing?
Information technology (IT) outsourcing is the transfer of an organization’s internal IT
infrastructure, staff, processes, or applications to an external resource provider.
Outsourcing can encompass anything from the simplest to the most sophisticated
IT infrastructure, processes, or applications. Usually, outsourcing contracts are created
to handle non-core information technologies or processes.
The outsourcing market can be divided into three main groups:
 Application outsourcing (AO)
 Business process outsourcing (BPO) and information utilities
 Platform IT outsourcing
Application Outsourcing
Application outsourcing (AO) comprises two subcategories: the ASP and application maintenance outsourcing (AMO) markets. The application provider is responsible for the management and maintenance of software applications. The difference between an ASP and an AMO is who actually owns the application.
An ASP remotely hosts and delivers packaged applications to the client from a centralized location. The client is effectively "renting" the application on a per-user or per-use basis. An AMO provides management for proprietary, packaged applications from either the client side or the provider side.
Business Process Outsourcing
Business process outsourcing (BPO) and information utilities providers are primarily concerned with economical and efficient outsourcing of highly sophisticated but repetitive business processes. These processes can be as complex as accounting and finance, or more routine, such as payroll. The provider is responsible for everything associated with the business process.
Platform Information Technology Outsourcing
Platform IT outsourcing offers an array of data center services, such as facilities management, onsite and offsite support services, data storage and security, and disaster recovery. The main differentiation for this type of outsourcing is the transfer of facilities and resources from the client to the provider.
The ultimate intention of an ASP is to allow the client to interact only with the ASP for the services involved. The main elements of this integration are the hardware, the software, integration and testing, a secure network infrastructure, reliable data center facilities, and qualified IT professionals who can manage and maintain these services.
The most critical portions of the ASP channel are software vendors, systems implementation, integration, and ongoing support. These components encompass the responsibilities that are necessary to effectively create and administer an ASP solution, and they help define the development of ASPs. Because of this, there are new opportunities for IT service providers to establish themselves in these markets and still differentiate their service offerings.
An ASP is capable of delivering any type of software application, from e-mail and instant messaging applications to an enterprise resource planning (ERP) system that can manage, control, and report on the multiple facets of the enterprise. The ASP should be able to provide prepackaged applications, support services, and the ability to tailor these packages based on client needs. Generally, the ASP would like to keep these alterations to a minimum, as customization adds complexity and associated support issues. Several of the larger ASPs have publicly stated that they offer little customization and have limited their implementations to core applications. Part of the reason ASPs do this is that they have negotiated short-term, nonexclusive licensing terms with ISVs, which helps to minimize overhead costs.
The Elements That Make an ASP Viable
What do you need to check to see if the conversion to an ASP is a viable option for you? There are several factors:
 Is there a reasonable demand either presently or in the immediate future
for your possible service offerings?
 Can the model that you plan to use support possible growth, even when that growth is unexpected?
 What can you expect for a return on investment (ROI)?
Several of these questions can be answered by planning the life cycle for the cost of ownership. This is also a good way to gain potential customers: if you can show that owning and running the applications themselves would be economically unfeasible, you can demonstrate that it would be more cost efficient to use your services.
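As a hedged illustration of that comparison, here is a minimal Python sketch; every figure is a hypothetical placeholder to be replaced with real quotes and estimates, not a benchmark:

# Hypothetical figures; substitute real quotes and estimates.
def simple_roi(inhouse_annual_cost, asp_annual_fee, migration_cost):
    """Annual savings and simple payback period for moving to an ASP."""
    annual_savings = inhouse_annual_cost - asp_annual_fee
    if annual_savings <= 0:
        return annual_savings, None  # no payback if the ASP costs more
    payback_years = migration_cost / annual_savings
    return annual_savings, payback_years

savings, payback = simple_roi(
    inhouse_annual_cost=250_000,  # staff, hardware, licenses (assumed)
    asp_annual_fee=150_000,       # per-user rental fees (assumed)
    migration_cost=80_000,        # one-time conversion cost (assumed)
)
print(f"Annual savings: ${savings:,}; payback in {payback:.1f} years")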
Life Cycle for the Cost of Ownership
What are the elements of the life cycle for cost of ownership? This section indicates
the items that must be incorporated into the internal cost of ownership
model, and the methodology that is used to determine the values associated with
those components.
Elements of the life-cycle cost included in this analysis are listed below; a brief cost-model sketch follows the list:
 The initial cost of hardware acquisition
 Hardware maintenance and associated costs
 Initial system software package acquisition
 Initial application software package acquisition
 Implementation
 The cost of hardware upgrades
 The cost of system software upgrades
 The cost of application software upgrades
 Network administration resources
 Other support (training, help desk, etc.)
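To make the model concrete, the following minimal Python sketch itemizes exactly these categories; the dollar values would come from your own quotes and estimates, and the helper name is ours, not an industry standard:

# Life-cycle cost-of-ownership model; all values are placeholders.
LIFECYCLE_CATEGORIES = [
    "hardware_acquisition",
    "hardware_maintenance",
    "system_software",
    "application_software",
    "implementation",
    "hardware_upgrades",
    "system_software_upgrades",
    "application_software_upgrades",
    "network_administration",
    "other_support",  # training, help desk, etc.
]

def total_cost_of_ownership(costs: dict[str, float]) -> float:
    """Sum the life-cycle categories, flagging any that were omitted."""
    missing = [c for c in LIFECYCLE_CATEGORIES if c not in costs]
    if missing:
        raise ValueError(f"model incomplete, missing: {missing}")
    return sum(costs[c] for c in LIFECYCLE_CATEGORIES)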
The Initial Cost of Hardware Acquisition
This is an average selling price based on common discount levels available for
products from value-added resellers (VARs). In some instances, there may be no
volume discount applied. Also, keep in mind that there is the possibility to
acquire less expensive equipment from local outlets, but you must make sure that
they have the same quality of components or the complete package support of a
national reseller.
Hardware Maintenance and Associated Costs
The standard warranty for most vendor equipment is between one and three years, and should cover most issues with problematic gear. The purchase of additional service for the years following the hardware warranty period is added into the hardware maintenance category.
Initial System Software Package Acquisition
The initial purchase price of system software such as a Unix platform or a
Microsoft Windows platform and their licensing are considered part of the initial
system software purchase.This category could also include software packages that
are necessary to run the applications on each machine. For example,WinFrame
for Windows Terminals operating system software would fall into this category.
Initial Application Software Package Acquisition
Initial application software acquisition is any application that assists in the productivity
of the organization.This could be an ERP package or some customer relationship
management (CRM) suite that will assist the company in management
and billing for its applications.
Implementation
This category represents the cost associated with initial implementation and configuration
of hardware and software, as well as costs associated with the ongoing
installation of expected upgrades.
Designing & Planning…
Hardware Service Contracts
Hardware warranties differ for each vendor and device in the infrastructure. Manufacturers may offer extended warranties as part of their purchase plan, or there may be some type of agreement wherein the vendor cycles in new equipment based on the timeframe involved. You should review what impact these and other service scenarios will have on your business.
Some of the categories you can use to determine the initial implementation cost include:
 The amount of time and resources that are necessary to install and configure
the equipment
 The amount of time and resources that are necessary to install and configure
the applications for the client base

The Business Case
ISP Market Conditions
The attraction to the ISP market is obvious. Users are adopting the technology faster than virtually any other advancement in history. Internet access reached 50-percent market penetration in less than eight years of existence. The growth rate in the United States is projected to be anywhere from 40 to 110 percent for at least the next few years. The growth rate is even more impressive when you measure bandwidth growth.
More importantly, much of the world has yet to be provided with Internet access, particularly some of the world's most populous nations such as India and China. These nations, which count their populations in hundreds of millions if not billions, have pent-up demand that is only increasing with the passage of time.
Even more attractive is the ever-increasing need for bandwidth. It has been demonstrated that the dial-up connection is only an introductory connection to the Internet. Users quickly lose patience with the slow speed of dial-up connections and long for broadband access. Applications such as digital photography, interactive content, and downloadable music only reduce the cycle-time for the inevitable upgrade. Demand for Digital Versatile Disk (DVD) quality video and other high-throughput applications has not even started its ascension, and this drives the demand for connection speeds far higher than the 1.544 Mbps that is now considered acceptable for small- to medium-sized businesses.
Even residential users will require speeds exceeding those currently offered by Digital Subscriber Line (DSL), cable modems, and the like as they begin to implement multiuser home networks, videoconferencing, and collaborative applications for business and pleasure from the home. Recreational activities such as downloading feature films or efficiently trading entire albums will also drive the need for additional bandwidth. Consumers will not accept the trip to the movie store for much longer, so the ability to access downloadable movies 100 times faster than anything currently available will be required to provide almost immediate access to the majority of existing films and shows.
Internet connectivity has become almost a requirement for any business and
is quickly trending toward 90-percent penetration within the consumer market.
As the power of convergence is fully implemented, Internet connectivity will
become more of a necessity than connections to the Public Switched Telephone
Network (PSTN) are today. Access to telephone calls, high-quality television and
radio, as well as a multitude of other services will all be provided by a single connection.
The demand for value-added services is also increasing. Businesses and
consumers are having their Web sites hosted, data stored, and applications provided
across Internet connections.
If all of these reasons weren't enough, the tremendous pace of technological advance is providing faster and more reliable connections to meet the demands of the consumer. These advances are providing new offerings such as wireless broadband or private DSL-to-ATM (Asynchronous Transfer Mode) networks that solve a host of problems. Customers will want to upgrade to these new services, which will continuously push revenue-per-user up for those service providers that are able to add these new technologies to their product offerings.
ISPs that thrive in this environment stand to profit enormously. Normally, revenue-generating networks can become far more efficient at higher utilizations. Those players with the largest user base will likely be able to develop impressive economies of scale and erect barriers to entry that currently do not exist. Those dominant players should also enjoy the best margins in a commoditizing business.
Figure 2.1 lists the services that are driving the demand for bandwidth.

Figure 2.1 Services That Are Driving the Demand for Bandwidth
[Figure: two columns, Consumer and Business, listing bandwidth-driving services: Internet access, interactive gaming, digital audio, Voice over X, managed VPN, distance learning, managed services, cable-quality video, videoconferencing, video on demand, e-commerce, ASP services, and virtual PBX service.]
A second offering, CustomAuctions, enables members to buy or sell a wide variety of telecommunication products through an online auction site. Items include bandwidth, dark fiber, and minutes of capacity. Options include English-style auctions (similar to eBay), reverse auctions (similar to Priceline.com), and sealed-bid auctions. RateXchange even provides strategically located delivery hubs to facilitate participants' access to each other's networks. It is now possible to trade bandwidth and fiber with no more difficulty or differentiation than steel or chemicals.
The result of these factors is lackluster income statements and very difficult paths to profitability. The easily accessible capital that in many ways created the current situation has now flocked to safer havens. The capital markets, venture firms, and private investors that once courted the industry are now far more selective in both debt and equity investments. The valuations of both instruments have been severely impacted, virtually cutting off additional sources of capital for the service provider space as a whole. Existing shareholders now demand profitability, in stark contrast to earlier requirements for market penetration.
Broadband—The Enabling Technology
Initially, the growth of broadband seemed to be the way to escape the strong
pricing pressures that dial-up providers faced. Significantly higher pricing was not
holding back explosive growth rates for broadband connections. Investors quickly
took notice, and capital flowed into the broadband segment. For a while, companies
such as Covad Communications, NorthPoint Communications, and
Rhythms NetConnections seemed like the exciting evolution of the ISP.
Unfortunately, as with dial-up access before, the realities of an undifferentiated
product and strong competition drove pricing down and demonstrated the inefficiencies
of their business model.
The reality of the DSL market is that providers must rely on the Incumbent Local Exchange Carrier (ILEC) for the all-important connection to the customer. This forces ISPs into the position of commodity resellers, which puts them in direct competition with their suppliers. The extreme pricing pressure inherent in a commodity environment makes it difficult for new entrants into the DSL segment to provide the low prices required by the market while still retaining profitability. Additionally, many of the providers chose not to develop a sales force, but instead contracted yet another layer of resellers to bring their product to market. The combination of these factors has already driven NorthPoint Communications into bankruptcy and has put many others in financial jeopardy.
To date, the cable industry has been able to keep virtually all competition off of its networks. It remains to be seen what the outcome will be, but all interested providers should carefully study the lessons of the DSL providers. While the cable industry may not have to deal with direct competition on its infrastructure, it will not be immune to the competitive access costs of other mediums such as DSL, terrestrial wireless, or satellite. If cable providers do not succumb to the pricing pressures of the industry as a whole, they will see massive turnover within their user base. They will face the same realities as all other providers and be required to add additional services to drive revenue.
While broadband connections seem to be following the same economic pattern
as their slower counterparts, their significance should not be overlooked.
Increasing broadband access speeds will be the foundation for the value-added
services that will allow ISPs to differentiate their offerings. Bandwidth as a standalone
technology will not provide profitability for service providers, but the capabilities
of those connections and the advantages of packet-switched technologies
will allow ISPs to add services that are highly profitable.
The inherent capabilities of high-speed packet-switched infrastructures will
also perfectly position ISPs to capitalize on the shortcomings of legacy networks.
In addition to offering traditional data services, ISPs will be capable of aggregating
services that were previously provided by multiple disparate networks.
Examples include local service, secure point-to-point circuits, long distance, and
videoconferencing.

Directly Attached Storage in Your Infrastructure
Server-to-storage access, or directly attached storage, has been in use for much of the history of computing, and still exists in over 90 percent of implementations today. An example of server-to-storage access, as shown in Figure 5.1, could be a workstation that has an internal hard drive, or a networked server that has an external disk array directly attached to it.
In these network implementations, storage devices are directly connected to a server using interfaces and/or bus architectures such as EIDE or SCSI. In more recent implementations, it is common to find newer devices that use Fibre Channel to directly attach to a server. Regardless of the method used to connect these devices, they all share the same architecture: a server or host is directly connected to a storage device over a storage bus.
This is not a very flexible model with which to work. Given that some hosts may require more storage space than others, it is very difficult to move capacity from one server to another. To do so, you would actually need to remove hard drives from one storage array or device and install them in another device when that device needs more space. Even with this solution, you may run out of physical space in a storage array and need to attach an additional array of disks. All of this "upgrading" would require the reconfiguration of the storage device and host systems, and would obviously become quite cumbersome and time consuming.
In addition to these drawbacks, performance is limited entirely by the directly attached server's abilities and its central processing unit (CPU). For instance, if a server is too busy doing calculations for other applications, it will have to wait or free up valuable CPU clock cycles in order to read and write from the storage device. This will impair its application and input/output (I/O) performance significantly. This may be acceptable for someone's personal computer, but in a mission-critical, performance-sensitive business environment, it can prove to be a serious problem with severe consequences and limited options.
Network Attached Storage Solutions
Network attached storage (NAS) is one of the latest solutions to hit the streets.
When you hear someone talking about SAN, you usually hear “NAS” in the
same sentence. While they both provide methods for accessing data, and resolve many file access issues when compared to traditional methods such as directly attached storage, in practice they differ significantly.
A NAS is a device that provides server-to-server storage. What does this mean? The answer is simple: NAS is basically a massive array of disk storage connected to a server that has been attached to a local area network (LAN), as depicted in Figure 5.2. In fact, it is very simple, and means exactly what it states.
As an example, imagine a host accessing data on a NAS server. The actual data is transmitted between these devices over their LAN interfaces, such as Fast Ethernet, using a communications protocol such as Internet Protocol (IP) or Internet Packet eXchange (IPX).
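As a hedged illustration of this kind of LAN-based file access, the following Python sketch mounts a hypothetical NFS export from a NAS server and reads a file over IP; the host name, export path, and mount point are assumptions, and the mount step presumes a Linux host with root privileges and an NFS client installed:

import subprocess
from pathlib import Path

# Hypothetical NAS export and local mount point.
NAS_EXPORT = "nas01.example.net:/exports/data"
MOUNT_POINT = Path("/mnt/nas")

MOUNT_POINT.mkdir(parents=True, exist_ok=True)

# Mount the NFS share; the data then travels over the LAN via IP.
subprocess.run(["mount", "-t", "nfs", NAS_EXPORT, str(MOUNT_POINT)],
               check=True)

# From here, file access looks local, even though every read crosses
# the network to the NAS server.
contents = (MOUNT_POINT / "report.txt").read_bytes()
print(f"read {len(contents)} bytes from the NAS")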
With the existing network infrastructure, the communications protocol might
also allow data to be transmitted between hosts that are extremely far apart. For
instance, a personal computer might access data on a file server that is thousands
of miles away by using the existing network infrastructure of the Internet, or a
customer computer might mount a drive on a remote server over a private wide
area network (WAN) connection such as a T1. In both of these cases, the server
being accessed is, for all intents and purposes, acting as NAS.
This can provide a great solution for applications, and will more than likely be the method most of your customers will use to connect to data that resides on your systems. It offers quite a lot of flexibility and requires very few upgrades to your network infrastructure. We already discussed the best benefit of this type of architecture, but it bears repeating here: you can use your existing network infrastructure for accessing data that resides on NAS servers.
There can be some serious drawbacks that are inherent to this solution, though. Probably the most important is the impact that such an architecture will have on your LAN and WAN. When we talk about sharing data, we might mean terabytes of data. Using a NAS device can easily bottleneck your network and seriously impact some of the other applications within your network.
I do not want to scare you away from this architecture, because it is still a very viable and robust solution. In fact, when connecting hosts or servers to data over very long distances, it is still a very good solution, and sometimes the only option available. Many of your customers will more than likely already have an existing connection into your network, so it becomes easy to add services with very little impact on your other clients. Some methods can be used to help eliminate the impact that a cluster of NAS devices might impose on your network.
Quality of Service
You can combat network performance problems by designing Quality of Service (QoS) into your network. In fact, we recommend using QoS throughout your network, even if you decide not to use NAS. QoS has the ability to assign priority to the packets traversing your network, forcing data with a lower priority to be queued in times of heavy use while allowing data with a higher priority to still be transmitted.
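As a rough illustration of that queuing behavior (not any particular vendor's QoS implementation), this Python sketch drains a priority queue so that higher-priority packets are always transmitted before lower-priority bulk traffic:

import heapq

# (priority, sequence, description): lower number = higher priority.
# The sequence number keeps ordering stable for equal priorities.
queue = []
for seq, (prio, desc) in enumerate([
    (2, "NAS bulk replication frame"),
    (0, "VoIP packet"),
    (1, "interactive session packet"),
    (2, "NAS backup frame"),
]):
    heapq.heappush(queue, (prio, seq, desc))

# In congestion, the scheduler transmits strictly by priority;
# bulk storage traffic waits while time-sensitive data goes first.
while queue:
    prio, _, desc = heapq.heappop(queue)
    print(f"transmit (priority {prio}): {desc}")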
A well-designed and implemented QoS schema can definitely help eliminate the impact that large volumes of data may have on other time-sensitive data, but it could still expose your network to a level of latency that is capable of growing exponentially. This is especially true if you do not plan correctly. When designing QoS in your network, it is very important to look at all the data traversing your network, and carefully weigh the advantages and disadvantages of a particular QoS strategy and its effect on each type of data and the network as a whole.
Location of NAS in Your Network
When designing NAS in your network, probably the most effective solution for latency and saturation issues is the location of your NAS servers in relation to the hosts and systems that access their data. The placement of NAS devices becomes extremely important, and performance can vary significantly depending on your design.
For instance, if you have a single large cluster of NAS devices in the middle of your network, all hosts will need to traverse deep into your network in order to access the servers and data. Consequently, you will have large amounts of data flooding every part of your network, which will more than likely create serious bottlenecks and latency issues at every step along the way.
In contrast, if you were to use smaller clusters of NAS devices, and locate these groupings close to the hosts that access them, the hosts will not need to traverse your network to access the NAS servers, thereby keeping network saturation to a minimum.
Unfortunately, there is no clear and concise way to design NAS in your network.
Your ultimate design will depend greatly on your current and future
growth patterns. As a general rule, remember that NAS devices should always be
kept as close as possible to the devices that access them. However, always keep
their purpose in mind, as well as who will be accessing the data, patterns of
usage, and the costs associated with distributing these systems.
In some cases, you may have very few clients accessing the data, and saturation may prove to be a nonissue; in others, it may prove to be the downfall of your network. When comparing price versus performance, try to keep your projected future growth in mind, as it can significantly alter the decision-making process.
Storage Area Networks
A storage area network (SAN) is a networked storage infrastructure that interconnects
storage devices with associated servers. It is currently the most cutting-edge
storage technology available, and provides direct and indirect connections to multiple
servers and multiple storage devices simultaneously.
With the use of technologies such as Fibre Channel, the SAN actually extends the storage bus, thereby allowing you to place servers far away from the storage devices that they access. In fact, the servers may be housed at locations that are completely separate from the site housing the storage. In this situation, we would be taking advantage of one of the greatest features that SAN technology provides.
A SAN can be thought of as a simple network that builds off the familiar LAN design. Instead of connecting hosts with other hosts and servers, it is designed to connect servers and hosts with a wide range of storage devices. A SAN uses network hardware that is very similar to what can be found in a typical LAN, and even includes the use of hubs (very rarely), switches, and routers. In its most basic form, it could be thought of as a LAN that is dedicated solely to accessing and manipulating data.
The Need for SAN
There are several scenarios behind the move to storage area networks. The major one is the need to manage the dramatically increasing volume of business data, and to mitigate its effect on network performance. The key factors include:
 E-business Securely transforms internal business processes and improves
business relationships to facilitate the buying and selling of goods, services,
and information through the Internet.
 Globalization The extension of information technology (IT) systems
across international boundaries.
 Zero latency The need to exchange information immediately so you
can maintain a competitive advantage.
 Transformation The ability to adapt, while maintaining the ability to
immediately access and process information that drives successful business
decisions.
Distributed computing, client/server applications, and open systems give today's enterprises the power to fully integrate hardware and software from different vendors to create systems tailored to their specific needs. These systems can be fast, efficient, and capable of providing a competitive edge.
Unfortunately, many enterprises have taken a far less proactive approach with
their storage systems. Storage, unlike a Web application server or a database
system, is rarely viewed as a strategic tool for the enterprise; this view, however, is
beginning to change.
With the explosive growth of e-business, IT managers are working very hard to keep pace with managing the significant growth of data (multiple terabytes, if not exabytes, per year). They are installing high-performance storage systems to meet the demands for smaller backup windows and greater application availability. However, these systems are sometimes much more complex and expensive to manage. In addition, they are often single-platform, restricting access to data across the network. To improve data access and reduce costs, IT managers are now seeking innovative ways to simplify storage management, and SAN is a promising solution.
Benefits of SAN
SANs remove data traffic, such as backup processes, from the production network, giving IT managers a strategic way to improve system performance and application availability. Storage area networks improve data access. Using Fibre Channel connections, SANs provide the high-speed network communications and distance needed by remote workstations and servers to easily access shared data storage pools.
IT managers can more easily centralize management of their storage systems and consolidate backups, increasing overall system efficiency. The increased distances provided by Fibre Channel technology make it easier to deploy remote disaster recovery sites. Fibre Channel and switched fabric technology can help eliminate single points of failure on the network.
With a SAN, virtually unlimited expansion is possible with hubs (again, very
rarely) and switches. Nodes can be removed or added with minimal disruption to
the network. By implementing a SAN to support your business, you can realize:
 Improved administration Consolidation and centralized management and control can result in cost savings. By allowing for any-to-any connectivity, advanced load-balancing systems, and storage management infrastructures, you can significantly improve resource utilization.
 Improved availability With a SAN, high availability can be provided
more effectively at lower cost.
 Increased business flexibility Data sharing is increased, while the
need to transform data across multiple platforms is reduced.
One of the main advantages of owning and operating a SAN is that it offers a
secondary path for file transfers, while keeping the LAN free for other types of
data communication. Figure 5.3 shows that the SAN is a separate network from
the LAN, and truly provides a secondary path for file transfers.

ASP Security System

Security Policy
An ASP needs to develop a general security policy that addresses how it manages
and maintains the internal security posture of its infrastructure. Issues such as
password management, security auditing, dial-in access, and Internet access are
some examples of the areas that should be addressed in a security policy. The policy is the written manifestation of current security requirements and guidelines, as well as procedures that your ASP consistently uses.
Consistent policies will give clarity within the ASP about what steps to take to ensure a minimum level of security. If the ASP is to see immediate improvement in its security position, establishing security policies is the logical step to follow assessment, and should be initiated as an adjunct to security planning.
As the plan for security management unfolds, the specific elements within the
environment may change. As changes occur, the policies should be reviewed and
modified to ensure that they communicate the current plan for protecting your
ASP environment. Security policies should be reviewed at least every six months to verify the validity of the policy, and they should be updated every time the policy changes, regardless of the reason. Therefore, security policies should be a continual work in progress.
Developing a Security Policy
To develop a comprehensive security policy, you will first need to understand what makes for a good security policy. In general, a security policy defines how an ASP manages, protects, and distributes sensitive information and resources. Any ASP, before connecting to the Internet, should develop a usage policy that clearly identifies the solutions it will be using and exactly how those solutions will be used.
First, the policy should be clear, concise, and understandable, with a large
amount of flexibility, and some type of built-in mechanism that allows for periodic
revisions and alterations as changes become necessary.
Second, you will need to define the requirements to which the security policy will adhere. To provide this, it will be necessary to draw on your usage policy and to use it as a guide for defining the security policy. This is necessary to maintain the required functionality while providing the security function. Your requirements should include the external customer demands as defined within your service level agreements (SLAs), external legal requirements concerning security, external supplier security policies, your internal security policies, and other security policies that relate to integration of customer environments with your company.
Third, you need to understand what needs to be protected. This might include, but not be limited to, computer resources, critical systems, sensitive systems, customer and company data, critical data, sensitive data, and public data. To help you evaluate your individual system needs, it is helpful to make a list of all the nodes in your network and to designate each of these with a level of security.
For instance, a public machine that poses few consequences if it were to become compromised might be considered low security; a Web server might be considered medium security; and your financial databases might be considered high security. Be careful when designating low-security systems, though. Just because a system may not contain any sensitive data does not mean that it is not a threat; if it has access to devices that do hold sensitive data, it might be used as a springboard to access other systems within the network.
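To illustrate, here is a minimal Python sketch of such a node inventory; the host names and tier assignments are hypothetical:

# Hypothetical node inventory mapped to security levels.
SECURITY_TIERS = {
    "kiosk-lobby": "low",   # public machine, few consequences
    "www-01": "medium",     # public-facing Web server
    "finance-db": "high",   # financial databases
}

def review_tier(node: str) -> str:
    """Look up a node's designated security level."""
    try:
        return SECURITY_TIERS[node]
    except KeyError:
        # An unclassified node is a policy gap; treat it as high
        # security until it has been reviewed.
        return "high"

print(review_tier("www-01"))      # medium
print(review_tier("new-server"))  # high (unclassified)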
Fourth, you need to define the security policy guidelines. To accomplish this, two policies should be written. The first should be a high-level policy written from the customers' perspective; it should be a simple document that gets directly to the point, based on security rationale and containing very little technical information.
The second, low-level policy should be written for security implementers, and should include detailed technical descriptions of procedures, filtering rules, and so forth. This document should clearly and concisely outline the exact security procedures, and should only be viewable by those who require the information. If such a document were to become publicly accessible, it could be used against your systems maliciously, because it identifies possible holes in your security policy and thus reveals methods into your network.
For instance, if you are using packet filtering to allow traffic only from a specific network, it might be possible for a would-be cracker to spoof an IP address in the accepted range in order to compromise your systems. Because of this, it is best to keep your security policy itself very secure.
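As a rough sketch of the filtering rule just described (not a production firewall), the following Python fragment uses the standard-library ipaddress module to accept only sources inside a hypothetical trusted network; note that this check alone cannot detect a spoofed source address, which is exactly the weakness the text warns about:

import ipaddress

# Hypothetical trusted range from the low-level policy document.
TRUSTED_NET = ipaddress.ip_network("192.168.10.0/24")

def permit(source_ip: str) -> bool:
    """Return True if the packet's source falls in the trusted range."""
    return ipaddress.ip_address(source_ip) in TRUSTED_NET

print(permit("192.168.10.57"))  # True: inside the trusted network
print(permit("203.0.113.9"))    # False: filtered out
# A forged 192.168.10.x source would pass this test, which is why
# filtering rules should stay confidential and be layered with
# anti-spoofing measures.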
Finally, you must ensure that your security policy is based on actual customer situations, while remaining clear, concise, consistent, and understandable. Furthermore, ensuring a good security policy requires periodic evaluation of the effectiveness of the current security systems, as well as periodic evaluation of the actual system configurations, or at least their security-relevant components.
Sometimes it may even be beneficial to hire a third-party security firm to
provide an unbiased evaluation and assessment of your security systems. In many
cases, they may discover issues that you did not, and they might be able to suggest
possible fixes for some of the issues they encounter.
In addition, it is sometimes easier to sell your customers on your security posture if an evaluation was performed by an outside security organization. At the least, it can help instill confidence in your customers about your organization.
Security Components
As an ASP, to validate both the security policy and the privacy policy, a review of the various security mechanisms and methods used to implement those policies is required. At a minimum, the following security components should be considered:
 Authentication
 Confidentiality
 Incident response
 Security auditing
 Risk assessment
Authentication
One of the most important methods of providing effective security is the ability to authenticate users and systems. In fact, all of your security mechanisms will be based on authentication in one way or another. As an example, you will need to authenticate users and nodes that access data on your systems. The authentication might take the form of a username and password, or an access list that governs access from a particular system's IP address to another system's IP address.
You may even use a different method entirely, or a combination of methods. Regardless of the method used, it is apparent that without the ability to verify the authenticity of a user or host, it is impossible to guarantee security. In fact, the success of your security mechanisms will hinge greatly on the methods of authentication they incorporate and that you employ throughout your network.
User Authentication
A requirement for any ASP is the ability to positively identify and authenticate
users. Depending on the level of security required, the mechanisms to support
this requirement can range from identifying users based on usernames and passwords,
to personal identification numbers (PINs) and digital certificates.
Usernames and Passwords
The use of usernames and passwords is one of the most ancient of all authentication schemes. I am sure at some point you have had to enter a username or password to gain access to a resource, or even to log in to your own personal computer. This being the case, you are probably already familiar with some of the security concerns associated with the use of passwords, such as not sharing them with others and keeping them private.
For instance, you know that you are not supposed to write your password on a piece of paper taped to your monitor, and that you should not use a password that is easy to guess, such as your first name. However, just because you understand these cardinal rules does not mean that others will too. Because of this, it is always important to set password guidelines for your users and make certain they adhere to those guidelines.
When evaluating identification and authentication mechanisms, you need to
consider both the mechanism and the implementation. A standard user ID and
password scheme should have a minimum password length of at least eight characters,
and require passwords to be nondictionary words. In addition, the implementation should limit unauthorized access attempts; at a minimum, after a fixed number of failed attempts, it should lock the account for some specified period. If the account is locked out multiple times, it should remain locked until an administrator can speak with the owner of the account.
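The following minimal Python sketch illustrates the policy just described; the length threshold mirrors the text, while the attempt limit, lockout window, and the tiny stand-in word list are assumptions:

import time

MIN_LENGTH = 8          # minimum password length from the policy
MAX_FAILURES = 5        # assumed lockout threshold
LOCKOUT_SECONDS = 900   # assumed lockout period (15 minutes)
DICTIONARY = {"password", "letmein", "welcome"}  # stand-in word list

failures: dict[str, int] = {}
locked_until: dict[str, float] = {}

def password_meets_policy(pw: str) -> bool:
    """At least eight characters and not a dictionary word."""
    return len(pw) >= MIN_LENGTH and pw.lower() not in DICTIONARY

def record_failure(user: str) -> None:
    """Count a failed attempt and lock the account past the limit."""
    failures[user] = failures.get(user, 0) + 1
    if failures[user] >= MAX_FAILURES:
        locked_until[user] = time.time() + LOCKOUT_SECONDS
        failures[user] = 0

def is_locked(user: str) -> bool:
    return time.time() < locked_until.get(user, 0.0)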
Personal Identification Numbers
A personal identification number (PIN) provides another mechanism that you can
use to enhance the security of a standard username and password system. In most
implementations, users log in to an ASP with their username and password. Once validated, the users are asked to enter their PIN, which is usually a numerical value that is predefined and known only to the user and the authentication mechanism. The PIN provides an extra level of access control, but can still be overcome fairly easily.
Digital Certificates
Deploying digital certificate technology would be a more robust access control mechanism. Today, the trend seems to lean toward a digital certificate-based solution that not only validates the user, but also enables the establishment of a session encryption key to support confidentiality of the transaction once the user is authenticated.
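As a hedged sketch of that pattern (certificate-based validation that also yields an encrypted session), the Python fragment below configures a TLS server context that requires and verifies a client certificate; the file names and port are hypothetical, and a real deployment would add revocation checking and certificate-to-user mapping:

import socket
import ssl

# Require a client certificate signed by our CA; the TLS handshake
# that verifies it also negotiates the session encryption key.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="clients-ca.pem")

with socket.create_server(("0.0.0.0", 8443)) as server:
    conn, addr = server.accept()
    with context.wrap_socket(conn, server_side=True) as tls:
        # The verified peer certificate identifies the user.
        print("authenticated subject:", tls.getpeercert()["subject"])
        tls.sendall(b"hello over an encrypted, authenticated session\n")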
If you use usernames and passwords solely for authentication services, you
may be exposing your ASP to an easy attack. If, for instance, an attacker were to
gain access to a system by compromising a username and password, he or she
would have access to all resources for which the account is privileged. This might allow the attacker access to a single host or numerous hosts in your network. It
could also give him or her the opportunity to access and alter data, as well as
wreak havoc on your systems and their functionality.
There are numerous methods an attacker can use to bypass password-based
security mechanisms, the most popular of which are network sniffing and brute
force.

The Design Process
An internetwork requires many layers of thought and design that encompass everything from physical space to future network considerations. There are generally three components when designing a large internetwork: data center networks, wide area networks (WANs), and remote users (in this case, your external clients).
 Data center networks are generally comprised of locally housed equipment
that will service your clients from a building, or set of buildings.
 Wide area networks are the connections between the data center and
the customer.
 Remote users are your clients and telecommuter traffic, which are subsets of your main client base.
Designing the network is a challenging task, but as I said earlier, you are probably doing this job because you like a challenge. You must take into account that each of the three components has its own distinct requirements. For example, an internetwork that is comprised of five meshed platforms can create all sorts of unpredictable problems, so attempting to create an even larger series of intermeshed networks that connect multiple customers who have their own network issues can be downright mind-boggling.
This is an age in which equipment is getting faster, sometimes by being more granular in the services that are offered, and other times by allowing more to be done within a single chassis. Infrastructure design is becoming more difficult as ASPs move toward more sophisticated environments that use multiple protocols and multiple media types, and that allow connections to domains outside your areas of influence because of customer requirements.
One of the greatest trade-offs in linking local area networks (LANs) and WANs into a packet-switching data network (PSDN) infrastructure is between cost and performance. The ideal solution would optimize packet-based services, yet this optimization should not be interpreted as picking the mix of services that would represent the lowest possible tolls. Your customers are going to want speed, availability, and ease of use. To successfully implement a packet-service infrastructure, you should adhere to two basic rules:
 When designing and implementing a packet-switching solution, try to
balance cost savings with your company’s internal performance requirements
and promises to its customers.
 Try to build a manageable solution that can scale when more links and
services are required by your company’s clientele.
Designing with the Hierarchy in Mind
One of the most beneficial tasks that you can perform in the design of your network is to create a hierarchical internetwork design that modularizes the elements of a large internetwork into layers of internetworking. The key layers that will help you create these modules are the Access, Distribution, and Core routing layers.
The hierarchical approach attempts to split networks into subnetworks so that traffic and nodes can be more readily managed. Hierarchical designs assist in the scaling of internetworks, because new subnetworks and technologies can be integrated into the infrastructure without disrupting the existing backbone. This also makes swapping out equipment and performing upgrades much easier, because it is a modular environment. Figure 8.2 illustrates the basic approach to hierarchical design.
Some advantages of a hierarchical approach include:
 Inherent scalability
 Easier to manage
 Allows for the optimization of broadcast and multicast control traffic
Note that this three-tier model is defined by the Core, Distribution, and Access layers (a minimal sketch modeling these layers follows the list):
 The Core layer is where the backbone of the network is located and is the central point that data must traverse. This area should be designed for speed. The most important aspect of this layer is to pass information to the rest of the network. The core should have a meshed, redundant design for higher efficiency.
 The Distribution layer is where your border routers are located. Most of the routing decisions should be made at this level. This is the area where you would implement policies for the network.
 The Access layer is the customer's network. This area may allow you the least control, because differing media and protocols may be used. This is usually the most over-subscribed part of the network.
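As a hedged illustration (the device names are hypothetical), this short Python sketch models the three layers as a simple data structure and checks the hierarchy rule that access devices attach to the distribution layer, which in turn attaches to the core:

# Hypothetical devices assigned to the three routing layers.
LAYERS = {
    "core": ["core-r1", "core-r2"],
    "distribution": ["dist-r1", "dist-r2"],
    "access": ["cust-sw1", "cust-sw2", "cust-sw3"],
}

# Uplinks: each device attaches to a device one layer up.
UPLINKS = {
    "dist-r1": "core-r1",
    "dist-r2": "core-r2",
    "cust-sw1": "dist-r1",
    "cust-sw2": "dist-r1",
    "cust-sw3": "dist-r2",
}

def layer_of(device: str) -> str:
    return next(layer for layer, devs in LAYERS.items() if device in devs)

# Verify the modular hierarchy: access -> distribution -> core.
EXPECTED = {"access": "distribution", "distribution": "core"}
for device, uplink in UPLINKS.items():
    assert layer_of(uplink) == EXPECTED[layer_of(device)], device
print("hierarchy is consistent")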
Scalability of Hierarchical Internetworks
Hierarchical internetworks are more scalable because they allow you to grow your internetwork gradually through the implementation of modules. This allows an infrastructure to grow in increments without running into the limitations normally associated with flat, nonhierarchical infrastructures. The drawback is that hierarchical internetworks require careful planning and implementation. There are many issues to consider when designing your network, including:
 The costs associated with virtual circuits
 The complexity that will be inherent in a hierarchical design
(particularly when integrated with a meshed topology)
 The need for additional hardware interfaces, which are necessary to
separate the layers within your hierarchy
 The scalability of the software and routing protocols
To fully utilize a hierarchical design, you should create your hierarchy with
your regional topologies in mind. Remember that the specifics of the design will
depend on the services you implement, as well as your requirements for fault tolerance, cost, and overall performance. Always think, "How can I get the most out of this design, and what are the potential problems that could arise?"
Manageability of Hierarchical Internetworks
There are management advantages inherent to hierarchical designs, such as the
flexibility of your design, the ease of installing these modular segments into your
network, and the management of fewer peers to your main convergence points.
 Design flexibility Designs that use the hierarchical approach will provide
greater flexibility in the use of WAN circuit services. Leased lines
can be implemented in the Core, Distribution, and Access layers of the
internetwork.
 Internetwork ease By adopting a hierarchical design, you will reduce the overall complexity of an internetwork by being able to separate the components into smaller units. This will make troubleshooting easier, and provide protection against broadcast storms, routing loops, and other potential problems.
 Hardware management The complexity of individual router and switch configurations is greatly reduced, because each router has fewer peers with which it needs to communicate.