There are some variables in the API that can be set at the content database level; the default maximum shred size is 64Kb. If a file is smaller than the maximum size set, it simply won't be shredded at all. More details are available in Bill's post.

### Existing Data

A key issue to point out: if you upgrade your existing SharePoint 2010 content databases to SharePoint 2013, they will not benefit from Shredded Storage until a new document version is created.

### Turning Off Shredded Storage

Shredded Storage can be turned off at the web application, site collection, and site (web) level; the default setting is AlwaysDirectToShredded. If you turn off Shredded Storage, SharePoint goes back to acting like it did in SharePoint 2010 – Cobalt v1 style. This means potentially higher file I/O between the WFE and SQL Server, and no storage savings on the deltas of versioned files.

### What Happens When You Enable RBS?

When you turn on RBS with a content database that has Shredded Storage enabled, the real-time RBS provider receives each shredded BLOB individually. These shreds are extremely small, and as our 2010 RBS research white paper showed, storing BLOBs smaller than 1Mb outside the SQL database is, in general, inefficient. This is why we recommend setting up RBS rules that leave files smaller than 1Mb in the content database. Our scheduled RBS product (DocAve Storage Manager) works fine with Shredded Storage, because when Storage Manager calls SharePoint to externalize a file, we do get the full BLOB; we can also apply more sophisticated business rules to decide whether to externalize it with RBS. With the RBS provider in the mix, fetching the 69th version of a document gets REAL chatty, with the provider fetching all of the individual shreds.
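To make the storage-savings idea concrete, here is a conceptual sketch of shredding in Python. This is not SharePoint's actual implementation or API – the shred size, hashing scheme, and store layout are all assumptions for illustration – but it shows the core mechanic: a file is cut into fixed-size shreds, and a new version only adds the shreds that actually changed.

```python
import hashlib

SHRED_SIZE = 64 * 1024  # hypothetical 64Kb chunk, matching the default discussed above


def shred(data: bytes, size: int = SHRED_SIZE) -> list[bytes]:
    """Split a BLOB into fixed-size shreds."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def store_version(blob_store: dict, data: bytes) -> list[str]:
    """Store only shreds not already present; return this version's shred ids."""
    ids = []
    for chunk in shred(data):
        key = hashlib.sha256(chunk).hexdigest()
        blob_store.setdefault(key, chunk)  # unchanged shreds are shared across versions
        ids.append(key)
    return ids


def fetch_version(blob_store: dict, ids: list[str]) -> bytes:
    """Reassemble a version by merging its shreds in order."""
    return b"".join(blob_store[k] for k in ids)


# Two versions that differ only in their second half: the unchanged
# leading shreds are stored once and shared between both versions.
store = {}
v1 = store_version(store, b"A" * 200_000)
v2 = store_version(store, b"A" * 100_000 + b"B" * 100_000)
latest = fetch_version(store, v2)
```

Note that fetching any version requires merging every one of its shreds back together, which is exactly the overhead discussed under Fetch Performance below.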
The shred size can potentially be increased up to 1Mb to be more efficient from an RBS perspective, but until we get more data from our labs we have no concrete guidance here yet. Some preliminary performance stats are below.

### Fetch Performance

From a performance perspective: if I save a 10Mb document 100 times, storing each version and randomly changing a few paragraphs throughout the document each time, then fetching version #69 (or even the latest version) requires merging all of the relevant shreds, and doing so entirely in the SQL software layer. This concerns me A LOT, as it is a huge performance overhead compared to simply fetching the entire BLOB version as in SharePoint 2010! The table below shows the time it took to perform a full SP-Export of the entire site collection under different configurations with exactly the same content data set:

| Shredded Storage | DB size (Mb) | RBS size (Gb) | Export time (secs) |
|---|---|---|---|
| Off | 24724.88 | Off | 1477 |
| Off | 54.58 | 23.40 | 1882 |
| Default – 64Kb chunk | 6000.31 | Off | 2471 |
| Default – 64Kb chunk | 103.25 | 6.35 | 3502 |
| 1Mb chunk | 6749.30 | Off | 2005 |
| 1Mb chunk | 95.19 | 6.25 | 3309 |
| 1Gb chunk | 13349.81 | Off | 1745 |
| 1Gb chunk | 74.00 | 12.40 | 2096 |
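Some back-of-envelope arithmetic shows why the real-time RBS path gets so chatty. This is an illustrative calculation only, not a measured result: it simply counts how many individual shreds (and therefore potential RBS provider round trips) a full version of the 10Mb test document implies at the two chunk sizes.

```python
# Shreds per full version of a document = ceiling(document size / shred size).
doc_size = 10 * 1024 * 1024          # the 10Mb test document

shreds_at_default = -(-doc_size // (64 * 1024))    # 64Kb default chunk
shreds_at_1mb = -(-doc_size // (1024 * 1024))      # 1Mb chunk

print(shreds_at_default)  # 160 individual BLOBs the RBS provider may fetch
print(shreds_at_1mb)      # 10 with the larger chunk size
```

At the 64Kb default, every full fetch of the document can mean 160 separate BLOB retrievals from the RBS provider versus a single retrieval for an unshredded file, which is consistent with the slower export times in the Shredded Storage + RBS rows above.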

