PC Tuning for Optimum Megascenery and Flight Simulator


10 Windows XP Performance Tweaks for Optimizing the Performance of MegaScenery, Flight Simulator 2004 and Your Overall Computing Experience

A lot of people purchase and install MegaScenery and are disappointed with the results. They ordered this much-hyped scenery, yet when they try it, it doesn't deliver the results as advertised. Their MegaScenery doesn't display properly; they get the "blurries," where the full-resolution scenery does not come into view fast enough. Often they blame MegaScenery, but the real culprit is a poorly tuned, far-from-optimized PC.

Flight Simulator 2004's scenery display engine does have its shortfalls when running scenery as demanding as MegaScenery, but when you perform the tweaks below (in combination with the tips supplied with MegaScenery) you can overcome these limitations. You will improve not only the performance of MegaScenery, but also the overall performance of Flight Simulator 2004 and your computing experience in general.

A computer needs to be tuned to optimize its performance. Out of the box, Windows XP is not a fully tuned operating system; it is riddled with settings that bottleneck performance. Follow the tweaks below and you will have a finely tuned system that gives you blazing performance. The first 8 tweaks do not cost you anything and are just a few changes you can make from within Windows. The final 2 tweaks involve spending a little bit of money but are well worth the investment.

Three Warnings and Recommendations

Before you perform these steps, please:

1. Ensure that you are familiar with REGEDIT. Incorrectly using Regedit could damage your Windows registry and leave your system unbootable and unrecoverable. If you are not confident using Regedit, have a friend who is confident perform these tweaks for you.

2. Back up any important data.
These tweaks are tried and tested, but if something goes wrong you could leave your system unbootable.

3. If you are unsure of what you are doing, get a technician or a friend who is sure of what they are doing to perform these tweaks for you.

Firstly – Be Running the NTFS File System

First, we'll assume your hard disk is running NTFS (New Technology File System). If your system came bundled with XP, it most likely is. If you upgraded a previous version of Windows to Windows XP, your file system may still be FAT32. If it is, you need to convert it to NTFS.

Note, however, that converting alone will not optimize your cluster size. Converting a hard disk from FAT32 to NTFS gives you 512-byte clusters, which is not optimal and will give you less than optimal hard disk performance, since 512-byte clusters result in higher hard drive overhead. The recommended cluster size is 4096 bytes. If your hard disk is not using this cluster size, you should reformat your system using this cluster size, which is the Windows Format default. Alternatively, Partition Magic will let you change the cluster size of an already formatted hard drive without requiring a reformat.

If you do not want to go through a complete reformat but do want to convert from FAT32 to NTFS, go to the Command Prompt: START => All Programs => Accessories => Command Prompt. To confirm which file system you are running, at the C: prompt type CHKDSK. The first line will tell you which file system you are running. If you are running FAT32, then to convert to NTFS enter the following:

CONVERT C: /FS:NTFS

The conversion will take about 30 minutes. Repeat this for any other hard disk volumes you may have, e.g. your D: or E: drive.
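The check-and-convert sequence above can be sketched as a Command Prompt session. Drive letters are examples only; run CHKDSK (which is read-only here) first, and convert only if it reports FAT32:

```bat
REM Report the current file system; the first line of output reads,
REM e.g., "The type of the file system is FAT32."
CHKDSK C:

REM Convert the C: volume to NTFS in place (one-way; back up first).
REM If C: is the system drive, conversion is scheduled for the next reboot.
CONVERT C: /FS:NTFS

REM Repeat for any additional volumes, e.g.:
CONVERT D: /FS:NTFS
```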
We suggest that before you do the conversion, you search for this topic in your Windows Help and Support option and read any documentation relating to converting a FAT32 drive to NTFS and the implications of doing so.

Secondly – Ensure That Your NTFS File System Cluster Size Is 4096 Bytes

See the conversion process in the section above. If you convert a disk from FAT32 to NTFS, you will need a utility such as Partition Magic to change your cluster size to 4096 bytes. If you are freshly formatting a hard drive, select 4096 bytes as the cluster size.

Once you have successfully set up your hard disk as NTFS, prepare for these tweaks that will boost your system performance out of sight! Here we go!

Tweak #1 – Disable Last File Access Stamping

By default, every time Windows XP reads a file, it stamps the file with the date and time of last access. While this can serve a purpose, it is generally not required for everyday computer use, and it hogs resources by writing to each file as it is accessed — so every read operation is also accompanied by a write operation of a few bytes. To turn it off, at the command prompt enter the following:

FSUTIL BEHAVIOR SET DISABLELASTACCESS 1

If you ever want to turn it back on again, simply retype the command replacing the 1 with a 0 (zero). Reboot your computer.

Tweak #2 – Turn Off Windows Indexing

While it's great for finding files faster when you are doing a search, Windows file indexing is always working in the background indexing files and thus contributing significantly to hard disk overhead. It is not worth the performance decrease it causes. To turn it off, go to Control Panel => Add/Remove Programs => Windows Components and deselect Indexing Service. Or go to START => RUN, type in services.msc, scroll to Indexing Service and select Disable.

Tweak #3 – Increase Your File System Cache

WARNING – DO NOT USE THIS TWEAK IF YOU ARE USING AN ATI VIDEO CARD.
IT COULD LEAVE YOUR SYSTEM UNBOOTABLE.

On a typical high-end system such as a 2.4 GHz Pentium, a hard disk will transfer data at around 30-40 megabytes per second. That's pretty fast! How would you like hard disk transfer bursts of around 1 gigabyte per second? You can achieve this easily using any version of Windows, because it's already part of the file system: file caching, where data that has been read from the hard disk remains in memory, so that if it is required again before it is flushed from the cache, it is read directly from the cache at these super-fast access rates.

That's right, it is already part of the file system, but the default setting (the setting you might already be using) is not necessarily the best. The default in Windows is to allocate more memory to programs and less to the file cache, but this wastes precious memory that could be used for file caching instead of not being used at all. If, for example, you have 512 MB of RAM and a program or two loaded, you have around 300 MB that isn't being used AT ALL. It's being reserved partly for file caching and partly for any programs you might load, with priority given to programs. Windows will not allocate all of that memory to the file cache, and it is VERY RARE that you would use all of it for programs. So you can either waste that memory and have degraded system performance, or you can use it for file caching. USE IT FOR FILE CACHING!

After you make the change below and reboot, you will notice greatly reduced hard disk access for both read and write operations, which is where the biggest bottleneck in computing lies. The performance benefits of this tweak are seen no matter how little RAM you have, but they are more pronounced the more RAM you have. We suggest no less than 512 MB; 1 gigabyte is better.
Perform the following: CONTROL PANEL => SYSTEM => ADVANCED => PERFORMANCE => SETTINGS => ADVANCED => MEMORY USAGE.

• Change the option from Programs to System Cache.
• Reboot your computer.

Tweak #4 – Increase CPU Priority

This will increase the CPU priority for programs running in the foreground, e.g. your Microsoft Flight Simulator 2004.

• Run Regedit.
• Find this key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PriorityControl\
• Set Win32PrioritySeparation to 38 DECIMAL (or 26 HEX).

Tweak #5 – Launch Folder Windows in a Separate Process

This tweak applies not only to folders but also to programs. Basically, when you run a program it gets its own memory allocation, which gives you a much more stable system. Programs run faster, but this does also use more RAM. To set it, open My Computer => Tools => Folder Options => View and tick the "Launch folder windows in separate process" box. Windows XP will then open each new window (including program windows) in its own memory and as a separate process — increased stability and speed, at the cost of using more RAM than before.

Tweak #6 – Defrag Your System Hard Disk

Fragmentation is a contributor to slower performance, so we suggest a regular defrag of your system; a weekly schedule is a good one. Windows XP has a built-in defragger, or you can opt for a third-party product such as PerfectDisk (www.raxco.com), Diskeeper (www.diskeeper.com) or O&O Defrag V8 (www.oo-software.com). Each of these defraggers has its own competitive advantages over the others. The best defragger to use with MegaScenery is O&O Defrag, because it performs one particular type of defragmentation that clearly improves MegaScenery performance.
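For readers comfortable at the command line, the scriptable tweaks above can be collected into one batch-file sketch. This is a convenience under stated assumptions, not part of the original instructions: it assumes cisvc is the XP Indexing Service's service name and that the Memory Usage "System Cache" option corresponds to the LargeSystemCache registry value. Back up your registry first, heed the ATI warning for Tweak #3, and reboot afterwards.

```bat
@ECHO OFF
REM Sketch of command-line equivalents of Tweaks #1-#4 and #6.
REM Assumptions: XP's Indexing Service runs as the "cisvc" service, and
REM the Memory Usage GUI option maps to the LargeSystemCache value.

REM Tweak #1 - stop stamping files with a last-access time.
FSUTIL BEHAVIOR SET DISABLELASTACCESS 1

REM Tweak #2 - stop and disable the Indexing Service
REM (note the required space after "start=").
SC STOP cisvc
SC CONFIG cisvc start= disabled

REM Tweak #3 - favour the file system cache over program memory.
REM DO NOT RUN THIS LINE ON A SYSTEM WITH AN ATI VIDEO CARD.
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f

REM Tweak #4 - boost foreground CPU priority (38 decimal = 26 hex).
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation /t REG_DWORD /d 38 /f

REM Tweak #6 - defragment the system drive (schedule this weekly).
DEFRAG C:
```

Tweak #5 (separate folder processes) has no command-line switch in this sketch; set it through the Folder Options dialog as described above.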