PSP CPS2 Cache Files
Reviews
This emulator is practically identical to the Neo Geo one (MVSPSP), so refer to that section for more information, especially concerning the creation of the CACHE files. As for the quality of the emulation, as with the Neo Geo, it is absolutely perfect. What are the CPS-2 Roll-Up Packs? The Roll-Up Pack List is a documented set of ROMs that can be placed on an SD card for a complete, ready-to-use CPS2 setup. The packs include every single CPS-2 game and hack, which saves people from having to scour for the files individually.
In computing, a cache (/kæʃ/ kash,[1] or /ˈkeɪʃ/ kaysh in Australian English[2]) is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
To be cost-effective and to enable efficient use of data, caches must be relatively small. Nevertheless, caches have proven themselves in many areas of computing, because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested already, and spatial locality, where data is requested that is stored physically close to data that has already been requested.
- 1 Motivation
- 2 Operation
- 3 Examples of hardware caches
- 4 Software caches
Motivation
There is an inherent trade-off between size and speed (given that a larger resource implies greater physical distances), but also a trade-off between expensive, premium technologies (such as SRAM) and cheaper, easily mass-produced commodities (such as DRAM or hard disks).
The buffering provided by a cache benefits both bandwidth and latency:
Latency
A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading in large chunks, in the hope that subsequent reads will be from nearby locations. Prediction or explicit prefetching might also guess where future reads will come from and make requests ahead of time; if done correctly the latency is bypassed altogether.
Throughput
The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grained transfers into larger, more efficient requests. In the case of DRAM circuits, this might be served by having a wider data bus. For example, consider a program accessing bytes in a 32-bit address space, but being served by a 128-bit off-chip data bus; individual uncached byte accesses would allow only 1/16th of the total bandwidth to be used, and 80% of the data movement would be memory addresses instead of data itself. Reading larger chunks reduces the fraction of bandwidth required for transmitting address information.
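The fractions quoted above can be checked directly. This is a sketch of the arithmetic using the example's hypothetical figures (a 128-bit bus, 32-bit addresses, single-byte accesses), not measurements of any real hardware:

```python
# Worked numbers from the example above: uncached single-byte accesses
# on a 128-bit off-chip data bus with 32-bit addresses.
BUS_BYTES = 16    # 128-bit data bus = 16 bytes per transfer
ADDR_BYTES = 4    # 32-bit address = 4 bytes
DATA_BYTES = 1    # one byte of payload per uncached access

# Only 1 of the 16 bus bytes carries useful data per transfer.
bandwidth_used = DATA_BYTES / BUS_BYTES
print(bandwidth_used)      # 0.0625, i.e. 1/16 of total bandwidth

# Of all bytes moved (address + data), addresses dominate.
address_fraction = ADDR_BYTES / (ADDR_BYTES + DATA_BYTES)
print(address_fraction)    # 0.8, i.e. 80% of movement is addresses
```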
Operation

Hardware implements cache as a block of memory for temporary storage of data likely to be used again. Central processing units (CPUs) and hard disk drives (HDDs) frequently use a cache, as do web browsers and web servers.
A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy.
When the cache client (a CPU, web browser, operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.
The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss. This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access.
During a cache miss, some other previously existing cache entry is removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, 'least recently used' (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry (see cache algorithm). More efficient caching algorithms compute the use-hit frequency against the size of the stored contents, as well as the latencies and throughputs for both the cache and the backing store. This works well for larger amounts of data, longer latencies, and slower throughputs, such as that experienced with hard drives and networks, but is not efficient for use within a CPU cache.[citation needed]
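The entry/tag/replacement machinery described above can be sketched in a few lines. This is an illustrative toy, not any particular hardware design; the `backing_store` dictionary and the capacity of 2 are assumptions chosen for the demo:

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache: tags map to copies of backing-store data, evicting
    the least recently used entry when capacity is exceeded."""
    def __init__(self, backing_store, capacity=2):
        self.backing = backing_store       # slower authoritative store
        self.capacity = capacity
        self.entries = OrderedDict()       # tag -> data, oldest first
        self.hits = self.misses = 0

    def read(self, tag):
        if tag in self.entries:            # cache hit: serve from cache
            self.hits += 1
            self.entries.move_to_end(tag)  # mark as most recently used
            return self.entries[tag]
        self.misses += 1                   # cache miss: go to backing store
        data = self.backing[tag]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[tag] = data           # copy into cache for next time
        return data

# The URL is the tag, the page content is the data (as in the text above).
store = {"/index.html": "<html>home</html>", "/about.html": "<html>about</html>"}
cache = LRUCache(store)
cache.read("/index.html")                 # miss: fetched from store
cache.read("/index.html")                 # hit: served from cache
print(cache.hits, cache.misses)           # 1 1 -> hit ratio of 50%
```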
Writing policies
When a system writes data to cache, it must at some point write that data to the backing store as well. The timing of this write is controlled by what is known as the write policy. There are two basic writing approaches:[3]
- Write-through: write is done synchronously both to the cache and to the backing store.
- Write-back (also called write-behind): initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.
A write-back cache is more complex to implement, since it needs to track which of its locations have been written over, and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, an effect referred to as a lazy write. For this reason, a read miss in a write-back cache (which requires a block to be replaced by another) will often require two memory accesses to service: one to write the replaced data from the cache back to the store, and then one to retrieve the needed data.
Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data.
Since no data is returned to the requester on write operations, a decision needs to be made for write misses: whether or not the data should be loaded into the cache. This is defined by two approaches:
- Write allocate (also called fetch on write): data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
- No-write allocate (also called write-no-allocate or write around): data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only.
Both write-through and write-back policies can use either of these write-miss policies, but usually they are paired in this way:[4]
- A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached.
- A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
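The two basic writing approaches can be illustrated with a minimal sketch. The dict-based `cache` and `backing` structures and the `flush` method are assumptions for the demo, standing in for hardware eviction logic:

```python
class WriteThroughCache:
    """Write-through + no-write allocate: every write goes straight
    to the backing store; the cache is filled on read misses only."""
    def __init__(self, backing):
        self.backing, self.cache = backing, {}

    def write(self, tag, data):
        self.backing[tag] = data          # synchronous write to the store
        if tag in self.cache:             # keep any cached copy consistent
            self.cache[tag] = data

class WriteBackCache:
    """Write-back + write allocate: writes land in the cache and are
    marked dirty; the store is only updated lazily, on flush/eviction."""
    def __init__(self, backing):
        self.backing, self.cache, self.dirty = backing, {}, set()

    def write(self, tag, data):
        self.cache[tag] = data            # write only to the cache...
        self.dirty.add(tag)               # ...and remember it is dirty

    def flush(self):
        for tag in self.dirty:            # the "lazy write" of dirty data
            self.backing[tag] = self.cache[tag]
        self.dirty.clear()

backing = {}
wb = WriteBackCache(backing)
wb.write("x", 1)
print("x" in backing)   # False: the store has not been updated yet
wb.flush()
print(backing["x"])     # 1: dirty data written back on flush
```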
Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.
Examples of hardware caches
CPU cache
Small memories on or close to the CPU can operate faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions).[5] Examples of caches with a specific function are the D-cache and I-cache and the translation lookaside buffer for the MMU.
GPU cache
Earlier graphics processing units (GPUs) often had limited read-only texture caches, and introduced Morton-order swizzled textures to improve 2D cache coherency. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel, indexed in complex patterns by arbitrary UV coordinates and perspective transformations in inverse texture mapping.
As GPUs advanced (especially with GPGPU compute shaders) they have developed progressively larger and increasingly general caches, including instruction caches for shaders, exhibiting increasingly common functionality with CPU caches.[6] For example, GT200 architecture GPUs did not feature an L2 cache, while the Fermi GPU has 768 KB of last-level cache, the Kepler GPU has 1536 KB of last-level cache,[6] and the Maxwell GPU has 2048 KB of last-level cache. These caches have grown to handle synchronisation primitives between threads and atomic operations, and interface with a CPU-style MMU.
DSPs
Digital signal processors have similarly generalised over the years. Earlier designs used scratchpad memory fed by DMA, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache).[7]
Translation lookaside buffer
A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB).[8]
Software caches
Disk cache
While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory, which is an example of disk cache, is managed by the operating system kernel.
While the disk buffer, which is an integrated part of the hard disk drive, is sometimes misleadingly referred to as 'disk cache', its main functions are write sequencing and read prefetching. Repeated cache hits are relatively rare, due to the small size of the buffer in comparison to the drive's capacity. However, high-end disk controllers often have their own on-board cache of the hard disk drive's data blocks.
Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives or solid-state hybrid drives (SSHDs).
Web cache
Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web.[9]
Web browsers employ a built-in web cache, but some Internet service providers (ISPs) or organizations also use a caching proxy server, which is a web cache that is shared among all users of that network.
Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli.[10]
Memoization
A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching.
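Python's standard library exposes memoization directly via `functools.lru_cache`. This sketch caches a deliberately expensive recursive function; the `call_count` counter is added here only to make the saving visible:

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)   # unbounded lookup table of past results
def fib(n):
    """Naive recursive Fibonacci; exponential without memoization."""
    global call_count
    call_count += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))       # 832040
print(call_count)    # 31: each n in 0..30 is computed exactly once,
                     # versus roughly 2.7 million calls without caching
```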
Other caches
The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library.
Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.
Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a 'Cached' link next to each search result. This can prove useful when web pages from a web server are temporarily or permanently inaccessible.
Another type of caching is storing computed results that will likely be needed again, or memoization. For example, ccache is a program that caches the output of the compilation, in order to speed up later compilation runs.
Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data.
A distributed cache[11] uses networked hosts to provide scalability, reliability and performance to the application.[12] The hosts can be co-located or spread over different geographical regions.
Buffer vs. cache
The semantics of a 'buffer' and a 'cache' are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering.
Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system.
With read caches, a data item must have been fetched from its residing location at least once before subsequent reads can realize a performance increase by being served from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase may be realized on the first write of a data item, because the item is immediately stored in the cache's intermediate storage, with the transfer to its residing storage deferred to a later stage or performed as a background process. In contrast to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand,
- reduces the number of transfers for otherwise novel data amongst communicating processes, which amortizes overhead involved for several small transfers over fewer, larger transfers,
- provides an intermediary for communicating processes which are incapable of direct transfers amongst each other, or
- ensures a minimum data size or representation required by at least one of the communicating processes involved in a transfer.
With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching.
A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency as opposed to caching where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once.
A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance-gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.
References
- ^ 'Cache'. Oxford Dictionaries. Retrieved 2 August 2016.
- ^ 'Cache'. Macquarie Dictionary. Macmillan Publishers Group Australia, 2015. Retrieved 21 July 2015.
- ^ Bottomley, James (1 January 2004). 'Understanding Caching'. Linux Journal. Retrieved 7 July 2019.
- ^ John L. Hennessy; David A. Patterson (16 September 2011). Computer Architecture: A Quantitative Approach. Elsevier. pp. B–12. ISBN 978-0-12-383872-8. Retrieved 25 March 2012.
- ^ 'Intel Broadwell Core i7 with 128 MB L4 cache'. Mentions the L4 cache; combined with separate I-cache and TLB, this brings the total number of caches (levels + functions) to six.
- ^ a b S. Mittal, 'A Survey of Techniques for Managing and Leveraging Caches in GPUs', JCSC, 23(8), 2014.
- ^ 'Qualcomm Hexagon DSP SDK overview'.
- ^ Frank Uyeda (2009). 'Lecture 7: Memory Management' (PDF). CSE 120: Principles of Operating Systems. UC San Diego. Retrieved 4 December 2013.
- ^ Multiple (wiki). 'Web application caching'. Docforge. Retrieved 24 July 2013.
- ^ Gareth Tyson; Andreas Mauthe; Sebastian Kaune; Mu Mu; Thomas Plagemann. Corelli: A Dynamic Replication Service for Supporting Latency-Dependent Content in Community Networks (PDF). MMCN'09. Archived from the original (PDF) on 18 June 2015.
- ^ Paul, S.; Fei, Z. (1 February 2001). 'Distributed caching with centralized control'. Computer Communications. 24 (2): 256–268. CiteSeerX 10.1.1.38.1094. doi:10.1016/S0140-3664(00)00322-4.
- ^ Khan, Iqbal (July 2009). 'Distributed Caching On The Path To Scalability'. MSDN. 24 (7).
Further reading
- 'What Every Programmer Should Know About Memory' by Ulrich Drepper