World Wide Web

Posted by boykoroile on Mon Aug 01, 2011 6:15 am
The World Wide Web, abbreviated as WWW and commonly known as the Web, is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them by using hyperlinks. Using concepts from earlier hypertext systems, English engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] He was later joined by Belgian computer scientist Robert Cailliau while both were working at CERN in Geneva, Switzerland. In 1990, they proposed using "HyperText [...] to link and access information of various kinds as a web of nodes in which the user can browse at will",[2] and released that web in December.[3]

"The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." [4] If two projects are independently created, rather than have a central figure make the changes, the two bodies of information could form into one cohesive piece of work.

History


Main article: History of the World Wide Web

In March 1989, Sir Tim Berners-Lee wrote a proposal[5] that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system.

With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web" of "hypertext documents" to be viewed by "browsers", using a client-server architecture.[2] This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal", as well as "the automatic notification of a reader when new material of interest to him/her has become available". See Web 2.0 and RSS/Atom, which have taken a little longer to mature.

The proposal had been modeled after the DynaText SGML reader by Electronic Book Technologies, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The DynaText system, licensed by CERN, was technically advanced and was a key player in the extension of SGML ISO 8879:1986 to hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration.

A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web:[6] the first web browser (which was a web editor as well), the first web server, and the first web pages[7] which described the project itself. On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup.[8] This date also marked the debut of the Web as a publicly available service on the Internet. The first server outside Europe was set up at SLAC to host the SPIRES-HEP database. Accounts differ substantially as to the date of this event. The World Wide Web Consortium says December 1992[9], whereas SLAC itself claims 1991[10].

The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University (built by, among others, Ted Nelson and Andries van Dam), Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex", which was described in the 1945 essay "As We May Think".

Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed a system of globally unique identifiers for resources on the Web and elsewhere: the Universal Document Identifier (UDI) later known as Uniform Resource Locator (URL) and Uniform Resource Identifier (URI); and the publishing language HyperText Markup Language (HTML); and the Hypertext Transfer Protocol (HTTP).[11]

The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On April 30, 1993, CERN announced[12] that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular web browser was ViolaWWW, which was based upon HyperCard.

Scholars generally agree that a turning point for the World Wide Web began with the introduction[13] of the Mosaic web browser[14] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore.[15] Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web's popularity was less than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.

The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet, and the European Commission. By the end of 1994, while the total number of websites was still tiny compared to present numbers, quite a number of notable websites were already active, many of which are the precursors or inspiration for today's most popular services.

Connected by the existing Internet, other websites were created around the world, adding international standards for domain names and HTML. Since then, Berners-Lee has played an active role in guiding the development of web standards (such as the markup languages in which web pages are composed), and in recent years has advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet.[16] Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet.[17] The Web is an application built on top of the Internet.

Function


The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global system of interconnected computer networks. In contrast, the Web is one of the services that runs on the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. In short, the Web is an application running on the Internet.[18] Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser, or by following a hyperlink to that page or resource. The web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it.

First, the server-name portion of the URL is resolved into an IP address using the global, distributed Internet database known as the domain name system, or DNS. This IP address is necessary to contact the Web server. The browser then requests the resource by sending an HTTP request to the Web server at that particular address. In the case of a typical web page, the HTML text of the page is requested first and parsed immediately by the web browser, which then makes additional requests for images and any other files that form parts of the page. Statistics measuring a website's popularity are usually based either on the number of 'page views' or associated server 'hits' (file requests) that take place.
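The sequence just described (a DNS lookup, then HTTP requests for the page and its parts) can be sketched in a few lines. The following TypeScript is a minimal illustration rather than a browser implementation; it assumes a Node.js 18+ runtime (for the built-in fetch and dns modules), and example.com is a stand-in hostname.

```typescript
// Minimal sketch of the fetch sequence described above (assumes Node.js 18+;
// example.com is a stand-in hostname).
import { promises as dns } from "node:dns";

async function fetchPage(hostname: string, path: string): Promise<string> {
  // Step 1: resolve the server name to an IP address via DNS.
  const { address } = await dns.lookup(hostname);
  console.log(`${hostname} resolved to ${address}`);

  // Step 2: request the resource over HTTP; a browser would then parse the
  // returned HTML and issue further requests for images and other files.
  const response = await fetch(`http://${hostname}${path}`);
  return response.text();
}

fetchPage("example.com", "/").then((html) => console.log(html.slice(0, 200)));
```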

While receiving these files from the web server, browsers may progressively render the page onto the screen as specified by its HTML, CSS, and other web languages. Any images and other resources are incorporated to produce the on-screen web page that the user sees. Most web pages will themselves contain hyperlinks to other related pages and perhaps to downloads, source documents, definitions and other web resources. Such a collection of useful, related resources, interconnected via hypertext links, is what was dubbed a "web" of information. Making it available on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.[2]

Linking


Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive web sites. The Internet Archive, active since 1996, is one of the best-known efforts.

Dynamic updates of web pages


Main article: Ajax (programming)

JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages.[19] The standardized version is ECMAScript.[19] To overcome some of the limitations of the page-by-page model described above, some web applications also use Ajax (asynchronous JavaScript and XML). JavaScript code delivered with the page can make additional HTTP requests to the server, either in response to user actions such as mouse clicks or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response. Thus the server only needs to provide limited, incremental information. Since multiple Ajax requests can be handled at the same time, users can interact with a page even while data is being retrieved. Some web applications regularly poll the server to ask if new information is available.[20]
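As a concrete illustration, the sketch below (browser-side TypeScript) polls a server and patches the response into the current page; the endpoint /api/news and the element id headlines are hypothetical names invented for the example.

```typescript
// Ajax-style partial update: fetch new data asynchronously and modify the
// current page in place instead of loading a whole new page.
async function refreshHeadlines(): Promise<void> {
  const response = await fetch("/api/news"); // hypothetical endpoint
  const items: string[] = await response.json();
  const list = document.getElementById("headlines"); // hypothetical element
  if (list) {
    list.innerHTML = items.map((title) => `<li>${title}</li>`).join("");
  }
}

// Poll the server every 30 seconds for new information.
setInterval(refreshHeadlines, 30_000);
```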

WWW prefix


Many web addresses begin with www, because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. The hostname for a web server is often www, just as it is ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) subdomain names, as in www.example.com. The use of such subdomain names is not required by any technical or policy standard; indeed, the first ever web server was called nxoc01.cern.ch,[21] and many web sites exist without a www subdomain prefix, or with some other prefix such as "www2", "secure", etc. These subdomain prefixes have no consequence; they are simply chosen names. Many web servers are set up such that both the domain by itself (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites.

When a single word is typed into the address bar and the return key is pressed, some web browsers automatically try adding "www." to the beginning of it and possibly ".com", ".org", or ".net" at the end. For example, typing 'apple' may resolve to http://www.apple.com/ and 'openoffice' to http://www.openoffice.org. This feature began to be included in early versions of Mozilla Firefox (when it still had the working title 'Firebird') in early 2003.[22] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only with regard to mobile devices.[23]

The 'http://' or 'https://' part of web addresses does have meaning: these refer to Hypertext Transfer Protocol and to HTTP Secure, and so define the communication protocol that will be used to request and receive the page and all its images and other resources. The HTTP network protocol is fundamental to the way the World Wide Web works, and the encryption involved in HTTPS adds an essential layer if confidential information such as passwords or bank details are to be exchanged over the public Internet. Web browsers often prepend this 'scheme' part to URLs too, if it is omitted. Despite this, Berners-Lee himself has admitted that the two 'forward slashes' (//) were in fact initially unnecessary.[24] In overview, RFC 2396 defined web URLs to have the form scheme://host/path?query#fragment. Here the host is, for example, the web server (like www.example.com), and the path identifies the web page. The web server processes the query, which can be data sent via a form (e.g., terms sent to a search engine), and the returned page depends on it. Finally, the fragment is not sent to the web server; it identifies the portion of the page which the browser shows first.
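The parts named above can be pulled out of an address with the standard URL class available in browsers and Node.js; the short sketch below just labels each component, again with example.com as a stand-in host.

```typescript
// Decomposing a web address into the components named above.
const url = new URL("https://www.example.com/path/page.html?q=beatles#section2");

console.log(url.protocol); // "https:"            -> scheme
console.log(url.hostname); // "www.example.com"   -> host (the web server)
console.log(url.pathname); // "/path/page.html"   -> path (the web page)
console.log(url.search);   // "?q=beatles"        -> query, processed by the server
console.log(url.hash);     // "#section2"         -> fragment, kept by the browser
```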

In English, www is pronounced by individually pronouncing the name of each character (double-u double-u double-u). Although some technical users pronounce it dub-dub-dub, this is not widespread. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for," with Stephen Fry later pronouncing it in his "Podgrammes" series of podcasts as "wuh wuh wuh". In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "myriad dimensional net",[25] a translation that very appropriately reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee's web space states that World Wide Web is officially spelled as three separate words, each capitalized, with no intervening hyphens.

Privacy


Computer users, who save time and money and who gain conveniences and entertainment, may or may not have surrendered the right to privacy in exchange for using a number of technologies including the Web.[27] Worldwide, more than half a billion people have used a social network service,[28] and of Americans who grew up with the Web, half created an online profile[29] and are part of a generational shift that could be changing norms.[30][31] Facebook progressed from U.S. college students to a 70% non-U.S. audience, and in 2009, prior to launching a beta test of the "transition tools" for setting privacy preferences,[32] estimated that only 20% of its members used privacy settings.

Privacy representatives from 60 countries have resolved to ask for laws to complement industry self-regulation, for education for children and other minors who use the Web, and for default protections for users of social networks. They also believe data protection for personally identifiable information benefits business more than the sale of that information.[34] Users can opt in to browser features that clear their personal histories locally and block some cookies and advertising networks, but they are still tracked in websites' server logs and, in particular, by web beacons. Berners-Lee and colleagues see hope in accountability and appropriate use achieved by extending the Web's architecture to policy awareness, perhaps with audit logging, reasoners and appliances. Among services paid for by advertising, Yahoo! could collect the most data about users of commercial websites, about 2,500 bits of information per month about each typical user of its site and its affiliated advertising network sites. Yahoo! was followed by MySpace with about half that potential and then by AOL-TimeWarner, Google, Facebook, Microsoft, and eBay.

Security


The Web has become criminals' preferred pathway for spreading malware. Cybercrime carried out on the Web can include identity theft, fraud, espionage and intelligence gathering.[39] Web-based vulnerabilities now outnumber traditional computer security concerns,[40][41] and as measured by Google, about one in ten web pages may contain malicious code.[42] Most Web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats are SQL injection attacks against websites. Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript[45] and were exacerbated to some degree by Web 2.0 and Ajax web design, which favor the use of scripts. Today, by one estimate, 70% of all websites are open to XSS attacks on their users.
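XSS succeeds when untrusted input is copied into a page as live markup. A standard defense, sketched below in TypeScript, is to escape HTML metacharacters in any user-supplied text before rendering it; this is one common mitigation, not a complete security treatment.

```typescript
// Escape HTML metacharacters so user input is displayed as text rather than
// executed as markup; this blocks the basic form of cross-site scripting.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script>alert("xss")</script>'));
// -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```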

Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations,[39] and some, like Finjan, have recommended active real-time inspection of code and all content regardless of its source.[39] Some have argued that for enterprises to see security as a business opportunity rather than a cost center,[49] "ubiquitous, always-on digital rights management" enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks.[50] Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.[51]

Standards


Main article: Web standards

Many formal standards and other technical specifications define the operation of different aspects of the World Wide Web, the Internet, and computer information exchange. Many of the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and other organizations.

Usually, when web standards are discussed, the following publications are seen as foundational:


  • Recommendations for markup languages, especially HTML and XHTML, from the W3C. These define the structure and interpretation of hypertext documents.
  • Recommendations for stylesheets, especially CSS, from the W3C.
  • Standards for ECMAScript (usually in the form of JavaScript), from Ecma International.
  • Recommendations for the Document Object Model, from W3C.

Additional publications provide definitions of other essential technologies for the World Wide Web, including, but not limited to, the following:


  • Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Internet, such as hypertext documents and images. URIs, often called URLs, are defined by the IETF's RFC 3986 / STD 66: Uniform Resource Identifier (URI): Generic Syntax, as well as its predecessors and numerous URI scheme-defining RFCs;
  • HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1, and RFC 2617: HTTP Authentication, which specifies how the browser and server authenticate each other. (A raw HTTP/1.1 exchange is sketched below.)
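To make the request/response shape of HTTP/1.1 concrete, the following sketch writes a request by hand over a plain TCP socket (Node.js, with example.com as a stand-in host); real browsers send many more headers.

```typescript
import { connect } from "node:net";

// Hand-written HTTP/1.1 exchange: one GET request, then print the raw
// response (status line, headers, and HTML body) as it arrives.
const socket = connect(80, "example.com", () => {
  socket.write(
    "GET / HTTP/1.1\r\n" +
    "Host: example.com\r\n" +   // required in HTTP/1.1
    "Connection: close\r\n" +   // ask the server to close after responding
    "\r\n"
  );
});

socket.on("data", (chunk) => process.stdout.write(chunk.toString()));
```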

Accessibility


Access to the Web is meant to be for everyone, regardless of disability, including visual, auditory, physical, speech, cognitive, or neurological impairments. Accessibility features also help people with temporary disabilities, like a broken arm, and the aging population as their abilities change.[52] The Web is used for receiving information as well as providing information and interacting with society, making it essential that the Web be accessible in order to provide equal access and equal opportunity to people with disabilities.[53] Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."[52] Many countries regulate web accessibility as a requirement for websites.[54] International cooperation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.[52][55]

Internationalization


The W3C Internationalization Activity works to ensure that web technology works in all languages, scripts, and cultures.[56] Beginning in 2004 or 2005, Unicode gained ground, and in December 2007 it surpassed both ASCII and Western European encodings as the Web's most frequently used character encoding.[57] Originally RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters (any character in the Universal Character Set), so a resource can now be identified by IRI in any language.[58]
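The IRI-to-URI mapping can be seen directly with the standard encodeURI function: non-ASCII characters are percent-encoded as their UTF-8 bytes. A small sketch (the address is an invented example):

```typescript
// An IRI containing Chinese characters and accented Latin letters,
// mapped to a plain-ASCII URI by percent-encoding the UTF-8 bytes.
const iri = "https://example.com/万维网?q=wàn wéi wǎng";
console.log(encodeURI(iri));
// -> https://example.com/%E4%B8%87%E7%BB%B4%E7%BD%91?q=w%C3%A0n%20w%C3%A9i%20w%C7%8Eng
```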

Statistics


According to a 2001 study, there were more than 550 billion documents on the Web, mostly in the invisible Web, or deep Web.[59] A 2002 survey of 2,024 million Web pages[60] determined that by far the most Web content was in English: 56.4%; next were pages in German (7.7%), French (5.6%), and Japanese (4.9%). A more recent study, which used Web searches in 75 different languages to sample the Web, determined that there were over 11.5 billion Web pages in the publicly indexable Web as of the end of January 2005. As of March 2009, the indexable web contained at least 25.21 billion pages.[62] On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered one trillion unique URLs.[63] As of May 2009, over 109.5 million websites were in operation. Of these, 74% were commercial or other sites operating in the .com generic top-level domain.

Speed issues


Frustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to an alternative, pejorative name for the World Wide Web: the World Wide Wait. Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other solutions to reduce the World Wide Wait can be found at W3C.[66] Standard guidelines for ideal Web response times are:[67]


  • 0.1 second (one tenth of a second). Ideal response time. The user doesn't sense any interruption.
  • 1 second. Highest acceptable response time. Download times above 1 second interrupt the user experience.
  • 10 seconds. Unacceptable response time. The user experience is interrupted and the user is likely to leave the site or system.

Caching


If a user revisits a Web page after only a short interval, the page data may not need to be re-obtained from the source Web server. Almost all web browsers cache recently obtained data, usually on the local hard drive. HTTP requests sent by a browser will usually only ask for data that has changed since the last download. If the locally cached data is still current, it will be reused. Caching helps reduce the amount of Web traffic on the Internet. The decision about expiration is made independently for each downloaded file, whether image, stylesheet, JavaScript, HTML, or whatever other content the site may provide. Thus even on sites with highly dynamic content, many of the basic resources need to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. This helps reduce page download times and lowers demands on the Web server.
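The "only ask for data that has changed" behavior is implemented with conditional requests. A minimal sketch follows (Node.js 18+, stand-in URL); real browsers also use ETag and Cache-Control for the same purpose.

```typescript
// Revalidate a cached resource: send If-Modified-Since and treat a
// 304 Not Modified response as "the cached copy is still current".
async function revalidate(resource: string): Promise<void> {
  const first = await fetch(resource);
  const lastModified = first.headers.get("last-modified");

  const second = await fetch(resource, {
    headers: lastModified ? { "If-Modified-Since": lastModified } : {},
  });
  console.log(second.status === 304 ? "use cached copy" : "re-download");
}

revalidate("http://example.com/");
```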

There are other components of the Internet that can cache Web content. Corporate and academic firewalls often cache Web resources requested by one user for the benefit of all. (See also Caching proxy server.) Some search engines also store cached content from websites. Apart from the facilities built into Web servers that can determine when files have been updated and so need to be re-sent, designers of dynamically generated Web pages can control the HTTP headers sent back to requesting users, so that transient or sensitive pages are not cached. Internet banking and news sites frequently use this facility. Data requested with an HTTP 'GET' is likely to be cached if other conditions are met; data obtained in response to a 'POST' is assumed to depend on the data that was POSTed and so is not cached.
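On the server side, the headers mentioned above are set per response. The sketch below (Node.js, with invented page content) marks a transient page as non-cacheable, as a banking or news site might:

```typescript
import { createServer } from "node:http";

// Serve a transient page with caching disabled so browsers and proxies
// always fetch a fresh copy (the page content here is just an example).
createServer((_req, res) => {
  res.setHeader("Cache-Control", "no-store"); // do not cache this response
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.end(`<p>Account balance as of ${new Date().toISOString()}</p>`);
}).listen(8080);
```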

Website


A website (also spelled Web site[1]) is a collection of related web pages, images, videos or other digital assets that are addressed relative to a common Uniform Resource Locator (URL), often consisting of only the domain name, or the IP address, and the root path ('/') in an Internet Protocol-based network. A web site is hosted on at least one web server, accessible via a network such as the Internet or a private local area network.

A web page is a document, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). A web page may incorporate elements from other websites with suitable markup anchors.

Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user of the web page content. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.

All publicly accessible websites collectively constitute the World Wide Web.

The pages of a website can usually be accessed from a simple Uniform Resource Locator (URL) called the homepage. The URLs of the pages organize them into a hierarchy, although hyperlinking between them conveys the reader's perceived site structure and guides the reader's navigation of the site.

Some websites require a subscription to access some or all of their content. Examples of subscription sites include many business sites, parts of many news sites, academic journal sites, gaming sites, message boards, web-based e-mail services, social networking websites, and sites providing real-time stock market data.

History


The World Wide Web (WWW) was created in 1989 by CERN physicist Tim Berners-Lee.[2] On 30 April 1993, CERN announced that the World Wide Web would be free to use for anyone.[3]

Before the introduction of HTML and HTTP, other protocols such as the File Transfer Protocol (FTP) and the Gopher protocol were used to retrieve individual files from a server. These protocols offered a simple directory structure which the user navigated to choose files to download. Documents were most often presented as plain text files without formatting, or were encoded in word processor formats.

Overview


Organized by function, a website may be


  • a personal website
  • a commercial website
  • a government website
  • a non-profit organization website

It could be the work of an individual, a business or other organization, and is typically dedicated to some particular topic or purpose. Any website can contain a hyperlink to any other website, so the distinction between individual sites, as perceived by the user, may sometimes be blurred.

Websites are written in, or dynamically converted to, HTML (HyperText Markup Language) and are accessed using a software interface classified as a user agent. Web pages can be viewed or otherwise accessed from a range of computer-based and Internet-enabled devices of various sizes, including desktop computers, laptops, PDAs and cell phones.

A website is hosted on a computer system known as a web server, also called an HTTP server. These terms can also refer to the software that runs on these systems and that retrieves and delivers the web pages in response to requests from the website's users. Apache is the most commonly used web server software (according to Netcraft statistics), and Microsoft's Internet Information Services (IIS) is also commonly used.

Static website


Main article: static web page

A static website is one that has web pages stored on the server in the format that is sent to a client web browser. It is primarily coded in Hypertext Markup Language (HTML).

Simple marketing websites, such as a classic five-page website or a brochure website, are often static websites, because they present pre-defined, static information to the user. This may include information about a company and its products and services via text, photos, animations, audio/video and interactive menus and navigation.

This type of website usually displays the same information to all visitors. Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, it is a manual process to edit the text, photos and other content and may require basic website design skills and software.

In summary, visitors are not able to control what information they receive via a static website, and must instead settle for whatever content the website owner has decided to offer at that time.

They are edited using four broad categories of software:


  • Text editors, such as Notepad or TextEdit, where content and HTML markup are manipulated directly within the editor program
  • WYSIWYG offline editors, such as Microsoft FrontPage and Adobe Dreamweaver (previously Macromedia Dreamweaver), with which the site is edited using a GUI interface and the final HTML markup is generated automatically by the editor software
  • WYSIWYG online editors, which create media-rich online presentations such as web pages, widgets, intros, blogs, and other documents.
  • Template-based editors, such as Rapidweaver and iWeb, which allow users to quickly create and upload web pages to a web server without detailed HTML knowledge, as they pick a suitable template from a palette and add pictures and text to it in a desktop publishing fashion without direct manipulation of HTML code.

Dynamic website


Main article: dynamic web page

A dynamic website is one that changes or customizes itself frequently and automatically, based on certain criteria.

Dynamic websites can have two types of dynamic activity: Code and Content. Dynamic code is invisible or behind the scenes and dynamic content is visible or fully displayed.

Dynamic code


The first type is a web page with dynamic code. The code is constructed dynamically on the fly using an active programming language instead of plain, static HTML.

"Dynamic code" refers to the website's construction, or how it is built, and more specifically to the code used to create a single web page. A dynamic web page is generated on the fly by piecing together certain blocks of code, procedures or routines. A dynamically generated web page calls various bits of information from a database and puts them together in a pre-defined format to present the reader with a coherent page. It interacts with users in a variety of ways, including by reading cookies that record a user's previous history, session variables, server-side variables, etc., or by direct interaction (form elements, mouseovers, etc.). A site can display the current state of a dialogue between users, monitor a changing situation, or provide information personalized in some way to the requirements of the individual user.
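As a small sketch of "piecing together blocks of code", the following assembles a page from reusable fragments plus per-user data; all the names here (header, footer, renderProfile, the User type) are invented for illustration:

```typescript
// Assemble a page on the fly from reusable blocks and per-user data
// (an in-memory object stands in for a database lookup).
type User = { name: string; visits: number };

const header = (title: string) =>
  `<html><head><title>${title}</title></head><body>`;
const footer = () => `</body></html>`;

function renderProfile(user: User): string {
  return (
    header(`Profile: ${user.name}`) +
    `<p>Welcome back, ${user.name}. This is visit number ${user.visits}.</p>` +
    footer()
  );
}

console.log(renderProfile({ name: "Alice", visits: 5 }));
```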

Dynamic content


The second type is a website with dynamic content displayed in plain view. Variable content is displayed dynamically on the fly based on certain criteria, usually by retrieving content stored in a database.

"Dynamic content" refers to how a website's messages, text, images and other information are displayed on the web page, and more specifically to how its content changes at any given moment. The web page content varies based on certain criteria, either pre-defined rules or variable user input. For example, a website with a database of news articles can use a pre-defined rule which tells it to display all news articles for today's date. This type of dynamic website will automatically show the most current news articles on any given date. Another example of dynamic content is when a retail website with a database of media products allows a user to input a search request for the keyword Beatles. In response, the content of the web page will change from what it showed before, displaying a list of Beatles products like CDs, DVDs and books.
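Both kinds of criteria appear in the sketch below: a tiny Node.js server that filters a hypothetical product list by the q query parameter, so that /search?q=beatles returns only matching items (the catalogue and port are invented for the example).

```typescript
import { createServer } from "node:http";

// A hypothetical product catalogue; a real site would query a database.
const products = ["Beatles Anthology DVD", "Abbey Road CD", "Mozart Requiem CD"];

createServer((req, res) => {
  // Variable user input: the search keyword from the query string.
  const query = new URL(req.url ?? "/", "http://localhost")
    .searchParams.get("q") ?? "";
  const hits = products.filter((p) =>
    p.toLowerCase().includes(query.toLowerCase())
  );
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.end(`<ul>${hits.map((h) => `<li>${h}</li>`).join("")}</ul>`);
}).listen(8080);
```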

Purpose of dynamic websites


The main purpose of a dynamic website is automation. A dynamic website can operate more effectively, be built more efficiently and is easier to maintain, update and expand. It is much simpler to build a template and a database than to build hundreds or thousands of individual, static HTML web pages.

Software systems


There is a wide range of software systems, such as Java Server Pages (JSP), the PHP and Perl programming languages, Active Server Pages (ASP), YUMA and ColdFusion (CFML) that are available to generate dynamic web systems and dynamic sites. Sites may also include content that is retrieved from one or more databases or by using XML-based technologies such as RSS.

Static content may also be dynamically generated, either periodically or whenever certain conditions for regeneration occur, and then cached, in order to avoid the performance loss of invoking the dynamic engine on a per-user or per-connection basis.

Plug-ins are available to expand the features and abilities of web browsers, which use them to show active content, such as Microsoft Silverlight, Adobe Flash, Adobe Shockwave or applets written in Java. Dynamic HTML also provides for user interactivity and real-time element updating within web pages (i.e., pages don't have to be loaded or reloaded to effect any changes), mainly using the Document Object Model (DOM) and JavaScript, support for which is built into most modern web browsers.

Turning a website into an income source is a common practice for web developers and website owners. There are several methods for creating a website business which fall into two broad categories, as defined below.

Content-based sites


Some websites derive revenue by selling advertising space on the site (see Contextual advertising).

Product- or service-based sites


Some websites derive revenue by offering products or services for sale. In the case of e-commerce websites, the products or services may be purchased at the website itself, by entering credit card or other payment information into a payment form on the site. While most business websites serve as a shop window for existing brick and mortar businesses, it is increasingly the case that some websites are businesses in their own right; that is, the products they offer are only available for purchase on the web.

Websites occasionally derive income from a combination of these two practices. For example, a website such as an online auctions website may charge the users of its auction service to list an auction, but also display third-party advertisements on the site, from which it derives further income.

Spelling


The forms website and web site are the most commonly used forms, the former especially in British English. Reuters, Microsoft, academia, book publishing, The Chicago Manual of Style, and dictionaries such as Merriam-Webster use the two-word, initially capitalized spelling Web site. This is because "Web" is not a general term but a short form of World Wide Web. As with many newly created terms, it may take some time before a common spelling is finalized. This controversy also applies to derivative terms such as web page, web master, and web cam.

The Canadian Oxford Dictionary and the Canadian Press Style book list "website" and "web page" as the preferred spellings. The Oxford English Dictionary began using "website" as its standardized form in 2004.[4]

Bill Walsh, the copy chief of The Washington Post's national desk, and one of American English's foremost grammarians, argues for the two-word spelling with capital W in his books Lapsing into a Comma and The Elephants of Style, and on his site, the Slot.[5]

The AP Stylebook from the Associated Press initially[6] said "Web site" was the proper spelling, but the AP announced in April 2010 that it would change to "website".[7]

Types of websites


There are many varieties of websites, each specializing in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include:


  • Affiliate site: a portal that renders not only its own custom CMS content but also syndicated content from other content providers for an agreed fee. There are usually three relationship tiers: affiliate agencies (e.g., Commission Junction), advertisers (e.g., eBay) and consumers (e.g., Yahoo!).
  • Archive site: used to preserve valuable electronic content threatened with extinction. Two examples are: Internet Archive, which since 1996 has preserved billions of old (and new) web pages; and Google Groups, which in early 2005 was archiving over 845,000,000 messages posted to Usenet news/discussion groups.
  • Blog (web log): sites generally used to post online diaries which may include discussion forums (e.g., blogger, Xanga).
  • Brand building site: a site with the purpose of creating an experience of a brand online. These sites usually do not sell anything, but focus on building the brand. Brand building sites are most common for low-value, high-volume fast moving consumer goods (FMCG).
  • City site: a site that presents information about a certain city or town and events that take place in that town, usually created by the city council or other "movers and shakers".
  • Geodomain site: a site whose domain name is the same as that of a geographic entity, such as a city or country. For example, Richmond.com is the geodomain for Richmond, Virginia.
  • Community site: a site where persons with similar interests communicate with each other, usually by chat or message boards, such as MySpace or Facebook.
  • Content site: sites whose business is the creation and distribution of original content (e.g., Slate, About.com).
  • Corporate website: used to provide background information about a business, organization, or service.
  • Electronic commerce (e-commerce) site: a site offering goods and services for online sale and enabling online transactions for such sales.
  • Forum: a site where people discuss various topics.
  • Gripe site: a site devoted to the critique of a person, place, corporation, government, or institution.
  • Humor site: satirizes, parodies or otherwise exists solely to amuse.
  • Information site: contains content that is intended to inform visitors, but not necessarily for commercial purposes, such as: RateMyProfessors.com, Free Internet Lexicon and Encyclopedia. Most government, educational and non-profit institutions have an informational site.
  • Java applet site: contains software to run over the Web as a Web application.
  • Mirror site: A complete reproduction of a website.
  • Microblog: a short and simple form of blogging.
  • News site: similar to an information site, but dedicated to dispensing news and commentary.
  • Personal homepage: run by an individual or a small group (such as a family) that contains information or any content that the individual wishes to include. These are usually uploaded using a web hosting service such as Geocities.
  • Phish site: a website created to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business (such as Social Security Administration, PayPal) in an electronic communication (see Phishing).
  • Political site: A site on which people may voice political views.
  • Porn site: A site that shows sexually explicit content for enjoyment and relaxation, most likely in the form of an Internet gallery, dating site, blog, social networking, or video sharing.
  • Rating site: A site on which people can praise or disparage what is featured.
  • Review site: A site on which people can post reviews for products or services.
  • School site: a site on which teachers, students, or administrators can post information about current events at or involving their school. U.S. elementary-high school websites generally use k12 in the URL, such as kearney.k12.mo.us.
  • Search engine site: a site that provides general information and is intended as a gateway or lookup for other sites. A pure example is Google; other well-known sites include Yahoo! Search and Bing.
  • Shock site: includes images or other material that is intended to be offensive to most viewers (e.g. rotten.com).
  • Social bookmarking site: a site where users share other content from the Internet and rate and comment on the content. StumbleUpon and Digg are examples.
  • Social networking site: a site where users can communicate with one another and share media, such as pictures, videos, music and blogs, with other users. These may include games and web applications.
  • Video sharing site: a site that enables users to upload videos, such as YouTube and Google Video.
  • Warez: a site designed to host or link to copyrighted materials for the user to download illegally.
  • Web portal: a site that provides a starting point or a gateway to other resources on the Internet or an intranet.
  • Wiki site: a site which users collaboratively edit (such as Wikipedia and Wikihow).

Some websites may be included in one or more of these categories. For example, a business website may promote the business's products, but may also host informative documents, such as white papers. There are also numerous sub-categories to the ones listed above. For example, a porn site is a specific type of e-commerce site or business site (that is, it is trying to sell memberships for access to its site). A fan site may be a dedication from the owner to a particular celebrity.

Websites are constrained by architectural limits (e.g., the computing power dedicated to the website). Very large websites, such as Yahoo!, Microsoft, and Google, employ many servers and load balancing equipment such as Cisco Content Services Switches to distribute visitor loads over multiple computers at multiple locations.

In February 2009, Netcraft, an Internet monitoring company that has tracked Web growth since 1995, reported that there were 215,675,903 websites with domain names and content on them in 2009, compared to just 18,000 websites in August 1995.

Awards


The Webby Awards are a set of awards presented to the world's best websites, a concept pioneered by Best of the Web in 1994.




