By admin in Uncategorized

Many women are scared to look old or even to grow old. For centuries, women have struggled to find a solution for skin that sags or begins to wrinkle. Every woman’s goal is to look good and beautiful, and looking old or wrinkly feels unacceptable. That is why, even now, they still search for something that will keep them looking forever young. Thanks to science, anti-aging creams were invented. These creams do not stop you from looking older, but they can slow down the aging of your skin.

Anti-aging creams are widely manufactured, sold and used all over the world. Women are the top consumers, and the rest are age-conscious men. Since early times, women have tried many creams, and even mud, to keep their faces young. Every kind of cream has a duration, though: some hold off the signs of aging for a long time while others do not, and the duration depends on the ingredients used. Herbal ingredients may sound good for anti-aging creams, but they are not proven effective. They sound inviting mainly because they are natural, meaning there is a low risk of chemical side effects. Researchers have not compared the effect and duration of creams made with herbal extracts against those made with synthetic chemicals.

Will Creams Cause Allergies To My Skin?

Every time you try something on your skin, do not forget to find out what your skin type is first. A skin allergy is the most common and most basic kind of skin reaction. Why does this happen? People tend to ask, “What is the best wrinkle cream?” without finding out what they are allergic to. An allergy does not always have something to do with food; there are also allergies caused by chemicals absorbed through the skin.

Remember that before you ask, “What is the best wrinkle cream?” you should first research what your chosen product is made of. Is it chemical or herbal? If it is chemical, you may want to ask your dermatologist whether a particular chemical will cause an unwanted reaction on your skin. If it is herbal, research what the effects of that herb are. Herbs are most likely hypo-allergenic. Much of the time, herbs are the answer to your question, “What is the best wrinkle cream?” precisely because they are natural and hypo-allergenic. There are many reasons why herbal creams are preferred over creams made of chemicals. Keep in mind that not everyone is immune to chemical skin reactions.

Making The Web Smoke

October 15, 2013
By admin in Distributed Computing

Like a lot of frequent Internet users, I find myself using the Web early in the morning or late at night, when response times are better. As with mainframes in the bad old days, the reason for the odd working hours is too much centralization.

The Internet is supposed to be, and largely is, a distributed environment. But in the case of the Web, the theory of distributed computing is bumping up against the reality: Lots of users are retrieving information directly from central, overloaded servers.

More mature protocols, such as FTP, Usenet and DNS (Domain Name System), have much better tools for distributing the load. If the Web worked like DNS, for example, we would be accessing the Web through a network of proxy servers and retrieving pages from the nearest server with fresh copies. Maybe someday. In the meantime, distributing the load is up to the providers of the information.

The most common response to poor performance is to increase bandwidth, but that’s not the most effective approach.

When this site started getting bogged down a couple of years ago, we looked first at the speed of our Internet connection. Surprisingly, utilization of our T-1 connection was hovering around just 20 percent, even during the worst traffic storms. This can partially be explained by the inefficiencies of HTTP that I talked about in last week’s column, but the main problem was overloaded servers, not limited bandwidth.

ZDNet has effectively alleviated performance problems by moving high-bandwidth services such as PC Magazine’s FTP archives and the PC Week Radio audio service to separate machines, or even to separate locations (so they use different Internet connections), and by equipping each server with lots of memory.

The next step is to go from one machine for each service to several machines for each service. The most common way of achieving this is to use “round-robin DNS” as a primitive form of clustering for the Web. Setting up such a system is simple: Give several machines the same DNS alias (also called a CNAME). For example, if you have five machines, named www1.company.com through www5.company.com, you would give each of those machines the alias www.company.com.
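
To make the mechanics concrete, here is a sketch of how that setup is commonly expressed in a BIND-style zone file, with several address records sharing one published name (the hostnames come from the example above; the addresses are invented):

    ; round-robin: one published name, several servers behind it
    www   IN  A   192.0.2.11   ; www1.company.com
    www   IN  A   192.0.2.12   ; www2.company.com
    www   IN  A   192.0.2.13   ; www3.company.com
    www   IN  A   192.0.2.14   ; www4.company.com
    www   IN  A   192.0.2.15   ; www5.company.com

The name server hands these records out in rotating order, so successive visitors are spread across the five machines.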

The main difficulty with this solution is figuring out how to replicate content among the servers. One interesting approach is to use the “reverse proxy” feature in the latest version of Netscape’s proxy server. Using this feature, the machines www1 through www5 would be proxy servers instead of Web servers and would get the original content from a Web server (possibly one that is inside your firewall).
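
As a rough illustration of the reverse-proxy idea (a minimal Python sketch, not Netscape’s product; the origin hostname and port are hypothetical), each front-end machine simply answers Web requests by fetching the page from a single internal origin server and relaying it back:

    # Minimal reverse-proxy sketch: relay GET requests to one origin server.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    ORIGIN = "http://origin.company.com"   # hypothetical internal Web server

    class ReverseProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            with urlopen(ORIGIN + self.path) as upstream:   # fetch the original content
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Type", upstream.getheader("Content-Type", "text/html"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)                       # relay it to the visitor

    if __name__ == "__main__":
        HTTPServer(("", 8080), ReverseProxy).serve_forever()

A real deployment would also cache responses, handle errors and the other HTTP methods, and sit behind the round-robin name described above.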

Until the Web evolves a distributed mechanism of its own, or until vendors begin offering easier forms of Web server clustering and replication, distributing your load will be an exercise in experimentation. The benefits, however, are worth it.

Mastering The Distribution

September 3, 2013
By admin in Distributed Computing

When playing golf, do you just want to get the ball in the hole, or do you want to be a scratch golfer? It all boils down to a commitment to hone your skills and master the terrain. The same rules apply in distributed computing. Last week we drew analogies between the different types of golf–from putt-putt (or miniature golf) to real golf–and the different partitioning schemes used in distributed computing.

When it comes down to this new environment, however, most of us are playing the client/server equivalent of miniature golf. But if you really want to play with the pros, object partitioning is the way to go.

I find it fascinating that while everyone is talking about component-based computing, there is no clear, comprehensive strategy in place. This is what object partitioning will provide: a technology and transport mechanism for distributing objects and methods that we may have acquired from multiple sources.

Distributed application components typically encompass the user interface, business logic, and transactional integrity logic. We place these components in different tiers for many reasons, most often to improve performance and simplify maintenance. It’s easy to throw stuff around. But to succeed in distributed computing, you need the right tools, architecture, and staff roles. Skip any of these, and you are in a serious sand trap!

Today’s distributed environments assume that an object and all of its methods are located on a single machine. But it would be far more interesting if an object’s methods could be dispersed across several machines. Then, the user interface of an object could be on the client, its business logic methods running on an application server or in a database, and its transactional integrity logic housed in the database.
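
To make the split tangible, here is a small Python sketch (the class, the method names, and the use of XML-RPC as the transport are all illustrative stand-ins, not any particular vendor’s partitioning tool): the object lives on the client, but one of its methods is a stub whose real work happens on an application server.

    # Illustrative only: a client-side object whose business logic runs remotely.
    import xmlrpc.client

    class Order:
        def __init__(self, customer, items):
            self.customer = customer
            self.items = items
            # Business-logic calls are forwarded to a (hypothetical) application server.
            self._app_server = xmlrpc.client.ServerProxy("http://appserver.example.com:8000")

        def display(self):
            # User-interface logic stays on the client.
            print(f"Order for {self.customer}: {len(self.items)} items")

        def total_price(self):
            # Stub: the pricing rules actually execute on the application server.
            return self._app_server.price_order(self.customer, self.items)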

To make this happen, client/server developers need to ramp up their skills. Business analysts will have to learn object-oriented analysis and design methods and tools. Features programmers will evolve into assemblers of components either purchased or written by a back-end programmer. The back-end programmer–who will write the business-level objects and encapsulate the business rules in object methods, structures, and relationships–will need to master object-oriented design and programming tools and techniques.

And as databases evolve to become object relational and based on object request brokers, the database analyst will also need to evolve her skill set.

So let me help you reduce your handicap. Send me your address, and I’ll ship out a partitioning starter kit, complete with a test and answers to determine the right distributed computing architecture for your users and business needs. I’ll also toss in a Rules of Roles poster to ensure you have the right skill sets on your projects.

Survey Says…

September 1, 2013
By admin in Distributed Computing

Distributed systems give users access to data and applications they never enjoyed in the mainframe world. Unfortunately, those applications also present major headaches to security-conscious information systems managers.

Not only are distributed systems harder to manage, they are more vulnerable to a range of assaults, from inadvertent misuse to malicious attacks by outside hackers and employees.

In a recent survey by Datapro Information Services Group in Delran, N.J., only 54% of 1,337 information security professionals said they had written security policies. That’s down from 82% in a 1992 survey. According to Datapro, the drop is due in part to the trend toward distributed systems.

Ted Combs, manager of computer security at AlliedSignal Aerospace Co. in Kansas City, Mo., said his company has a highly decentralized computing environment but has written security policies to protect it. He said the company’s security model is to have centralized IS professionals develop security standards, then have business units implement them.

“But we go out periodically and survey their machines to make sure they have lived up to the standards,” Combs added.

Security standards range from technical parameters, such as which protocols or services an Internet firewall should block, to policies such as scanning all diskettes for viruses.

Combs said AlliedSignal set up a companywide team of network administrators that includes people from central IS and the business units. Such teams are a good way to get user “buy-in” on security, he said.

That’s the approach large organizations should take, said Stephen T. Kent, chief scientist for security technology at BBN Corp. in Cambridge, Mass. “Then a certain distribution of security responsibility is appropriate,” he said.

AUDIT TRAIL

Amoco Corp. sets security policy from within its 2,000-person centralized Information Technology Shared Services unit, and it dispatches auditors periodically to business units to check on compliance with those policies. But auditing is so labor-intensive that any one person or computer might get checked out only every other year, said Steve Ferguson, project manager of the Distributed Computing Initiative.

That means self-auditing by users is necessary, Ferguson said. “We try to get people to understand security is important,” he said. “And we try to give them security tools that are not too onerous or too disruptive.”

Sometimes users fall out of compliance with security standards through no fault of their own. For example, Ferguson said, screen savers that protected PCs by locking them up after a period of inactivity no longer worked when users migrated from Windows 3.1 to Windows 95. Those little details need to be looked for in audits, he said.

In a recent survey of 205 Fortune 1,000 companies, WarRoom Research LLC in Baltimore found that 72% to 87% had security policies of various types, but the effectiveness of those policies was called into question by other survey findings.

Nearly half the companies said their networks had been penetrated by outsiders during the past 12 months, and 27% said they had no ability to detect unauthorized access to their systems. Of those suffering attacks, 58% said their systems had been penetrated more than 10 times.

Internet Pushers

August 8, 2013
By admin in Distributed Computing

Don’t let the old cattle barn at Alden Electronics fool you into thinking this is just another sleepy New England farm. The garden of satellites next to the silo is a dead giveaway that Alden is doing more than pushing tractors in a field. In fact, this marine electronics and weather data distributor is tilling a new client/serverlike application that will deliver up-to-the-minute weather information over the Internet to customers. If all goes well, it could eventually put the satellites out to pasture.

Alden isn’t the only company pushing the Internet beyond its limits. Companies such as brokerage firms and nuclear fusion research houses are discovering that the Internet’s most compelling draw is not catalog shopping in cyberspace, but rather the lessons IT managers can learn when it comes to designing client/server systems. Internetlike distributed architectures–which use brokers to pass requests along between clients and servers–can be developed and maintained more easily than traditional architectures, with the added benefits of code reuse and robust performance, experts say.

“In the next 20 to 24 months after people … don’t think the cybermall is the Holy Grail anymore, [they will see] that the real strength of the Internet is corporate communications,” says Jim Medalia, president of DXX’s Internet Business Center, an Internet service provider and consultancy, in New York. With concerns over security and reliability abating, Medalia and other Internet access providers say a growing number of corporate clients are exploring the possibility of distributing client/server applications over the Net.

Old Reliable

Alden is taking steps toward that goal. The Westboro, Mass., firm has been delivering large graphics and real-time satellite images to a test group of university customers over the Internet for months “without any trouble,” says William Highlands, manager of data communications systems. Alden’s application, which uses Denver-based Unidata Inc.’s Local Data Manager on both the “client” and “server,” has been up and running over the Internet since last September. The LDM software transmits large amounts of alphanumeric and graphical data over the Internet and packages it so that the client software can identify and reconstruct what was sent from the transmitting station.
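
To illustrate the packaging idea (this is a toy Python sketch, not Unidata’s actual LDM wire format), each product is framed with a header naming what it is and how long it is, so the receiving end can identify and reconstruct it:

    # Toy framing example: identify each product and preserve its boundaries.
    import struct

    def frame(product_id, payload):
        name = product_id.encode("ascii")
        # header: length of the product name, length of the payload, then the name
        return struct.pack("!HI", len(name), len(payload)) + name + payload

    def unframe(message):
        name_len, payload_len = struct.unpack("!HI", message[:6])
        name = message[6:6 + name_len].decode("ascii")
        payload = message[6 + name_len:6 + name_len + payload_len]
        return name, payload

    # Example: a satellite image travels as an identified, reconstructable unit.
    msg = frame("GOES-IR-19950815", b"...image bytes...")
    print(unframe(msg)[0])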

Before Internet transmission becomes Alden’s modus operandi for more critical customers, such as the military and commercial airlines, Highlands admits that security and control issues will have to be more fully ironed out. But there’s a distinct possibility that the Internet could give Alden’s satellite system a run for the money. “[We'll] be able to reach more customers with less fuss,” says Highlands about the new Internet delivery system.

Cost and return on investment will also be crucial factors before Alden’s satellites are replaced by the infobahn. According to President Arnold Kraft, small satellites and receivers cost the same as–or less than–an Internet server and software. “But that looks like it could change,” he says.

Call Your Broker

One element of the Internet that should be put to good use by application architects is its use of brokers–software that can handle requests for data or other services without knowing a lot of detail about either the requester or provider of the service. Cliff Commington, vice president of marketing at BBN Planet Inc., in Cambridge, Mass., says that architecture is critical. “The Internet is just an applications platform. What do you do with it? You build applications,” he says.

For example, each World-Wide Web server on the Net is in essence a broker, which uses hypertext links to provide information or pass on requests for data to another Web site even if it knows little about the systems at either end. Even if an application isn’t running over the Internet, a distributed, broker-based architecture can pay off in spades with reduced development time and increased flexibility.

That’s been the case at T. Rowe Price, in Owings Mills, Md., which is using the broker approach for the next version of its Client Access Inquiry System. This application, which lets customers dial in to mainframe databases to check financial information, such as the performance of funds, employs broker services by Open Environment Corp.’s Entera development tool. The system works in a three-tiered environment of clients, functionality servers, and database servers.

The brokers provided by Entera are the key to this architecture, says system architect Kirk Kness. They work alongside the three-tiered architecture, “like a big yellow pages,” keeping track of which application services are available on which servers and telling client applications where to find what they need. Because none of that information needs to be coded into the three tiers, it is easier to port the services to other platforms. It also reduces the need for cumbersome updates to the clients, Kness says.
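
A toy “yellow pages” makes the broker’s role concrete (this sketch is illustrative Python, not Entera’s API; the service and host names are hypothetical): servers register the services they offer, and clients ask the broker where to find a service instead of hard-coding server locations.

    # Illustrative service registry: the broker maps service names to locations.
    class Broker:
        def __init__(self):
            self._services = {}            # service name -> list of (host, port)

        def register(self, service, host, port):
            self._services.setdefault(service, []).append((host, port))

        def lookup(self, service):
            # Return any available location; a real broker might load-balance.
            return self._services[service][0]

    broker = Broker()
    broker.register("fund-performance", "funcsrv1.example.com", 9001)
    host, port = broker.lookup("fund-performance")
    print(f"call fund-performance at {host}:{port}")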

T. Rowe Price’s next step will be to deploy the application over the Internet, which the company hopes to do as soon as the end of this year. “This application could run across the Internet with no changes,” says Kness, who added that it will “when we get good security and good encryption.”

Rocket Science

One of the ultimate challenges will be to actually distribute application code across the Internet. Researchers at the Massachusetts Institute of Technology have taken a major step in this direction, with applications that help control the Alcator C-Mod tokamak, or experimental fusion reactor, at the school’s Cambridge, Mass., campus.

Tokamaks create extremely high-temperature plasmas in the search for ways to produce low-cost fusion power. With shrinking research budgets, there is less money to build tokamaks, meaning researchers around the globe need to share the existing ones. One solution would be to control tokamaks from a remote site, cutting down the cost and inconvenience of sharing.

That’s where the Internet comes in: This spring, MIT researchers working from the Lawrence Livermore National Laboratory, in Livermore, Calif., for the first time used ESnet (Energy Science Network), a subset of the Internet, to control and monitor experiments at MIT’s reactor in Cambridge.

The California researchers used a combination of Hewlett-Packard Co. workstations and a Silicon Graphics Inc. Indigo workstation, equipped with a camera to provide full-motion videoconferencing. ESnet also provided a 200K-bps link to Digital Equipment Corp. workstations in Cambridge, which control the tokamak, says staff scientist Steve Horne of MIT’s Plasma Fusion Center.

“We did not simply have people talking to each other,” says Horne. “We had the control panel of the machines and the displays on the [workstations] out there [on the Net].”

After solving an early problem with an intermediate node on ESnet that was dropping packets, response time was fast enough that the remote researchers could work as quickly as researchers in Cambridge–setting up one shot every 15 minutes. The setup even allowed for application partitioning over the Net. Users could move data or logic between different platforms to avoid network bottlenecks simply by switching between X Windows screens on their workstations.

The lesson from these companies’ pioneering efforts: Whether or not you bring applications to the Internet, it’s smart to start bringing a little bit of Internet design to applications.

By admin in Middleware

Software AG will make products available that ease integration of legacy applications with applications running on Microsoft platforms by encapsulating the legacy applications in OLE interfaces. These will be extensions to Software AG’s Entire middleware products.

“The objective overall is to build the market for OLE,” said Dan Neault, Microsoft senior product manager. “The market’s already strong, but it’s limited currently to Windows platforms. So the market forces of increased availability, increased quality and decreased price will be kicking in on a much larger scale.”

Also, “Software AG will work together with Microsoft to provide consulting, systems integration and support worldwide, so that customers can take advantage of the technology,” Neault said. “This is a capability that we recognized that we needed, and Software AG is uniquely qualified to do this because of their expertise cross-platform. We looked around and could find no other company that had the same degree of excellence in doing this sort of cross-platform porting. They are quite OLE-smart.”

In addition, Software AG has a heritage as a mainframe company that reinvented itself around distributed computing.

“The further we dug into this, the more we realized there was a confluence of interests. We both want the technology to converge. We both want a strong middleware model,” Neault said.

Software AG has committed to adopt OLE-integration technology.

“We’re going to continue to develop and to modify a number of our existing connectivity tools to enhance the overall integration of OLE technology into our overall product set,” said David McSwain, vice president of marketing and technology at Software AG.

The OLE-integration technology will be made available to the dominant flavors of Unix, the AS/400 and MVS systems, “as well as some geographic-specific ports like ICL for the [U.K.] marketplace, Fujitsu for the Pacific Rim marketplace, Siemens-Nixdorf for the European marketplace and so forth,” McSwain said.

Software AG also has two OLE-based products that are well into the development cycle. In the first half of 1996, it will release an OLE-automation product that will allow a Microsoft-based tool or application to plug into a native OLE interface and access services, databases and transactions that reside on an IBM MVS mainframe on the back end. The OLE-automation component, or front-end, is being engineered onto the company’s existing middleware.
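
For a sense of what driving such an OLE-automation interface might look like from a client-side script, here is a hedged Python sketch using the pywin32 package on Windows; the ProgID and the method names are hypothetical stand-ins, not Software AG’s actual interface.

    # Hypothetical example of scripting an OLE-automation front end from Python.
    import win32com.client

    gateway = win32com.client.Dispatch("SoftwareAG.MainframeGateway")   # hypothetical ProgID
    gateway.Connect("MVSPROD")                                          # hypothetical method
    result = gateway.CallTransaction("CUSTINQ", "customer-id=12345")    # hypothetical method
    print(result)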

In addition, “we have a product family called Natural, a 4GL for building mission-critical systems. We’re in the process of developing a release slated for the fourth quarter of 1996 of a version of Natural with a full-blown OLE and OCX environment,” said McSwain.

These two products are independent of the alliance and are interim stepping stones for customers and prospects to move in that direction.

Network OLE and Software AG’s product integration will enable the building of either a single-node system on any of the platforms using Network OLE, or multiple platforms bolted together with Network OLE as a common backbone for runtime “componentware” and object-based runtime systems.

Gerhardt Beyer, Software AG’s technical director and objects guru, said Software AG has been approached by consulting companies and system-integrators that are interested in using tools that are a widely accepted industry standard in their consulting projects.

“If you can show them that on one end you have OLE-compliant interfaces that can play with all the desktop tools and use that in your integration projects, that is an advantage for these consulting companies,” Beyer said.
