Super Talent SSD: 16GB of Solid State Goodness

by Gary Key on 5/7/2007 4:00 AM EST

  • eguy - Monday, May 14, 2007 - link

    These guys are knowledgeable and sell the Super Talent and other SSDs. They are even working on a RAID0 SSD box! http://www.dvnation.com/nand-flash-ssd.html

    There are not many SSDs that can benefit from RAID0. The issue is that the CONTROLLERS used in these disks max out in speed before the NAND chips will. That means the Samsung NAND chips, while capable of 60+MB/s, are throttled by a controller that in some cases will only do 25MB/s. In a hard disk, the media transfer rate is lower than the controller's bandwidth; the hard disk controller can do 150MB/s+. So in hard drive land, a 50MB/s hard disk + another 50MB/s hard disk = about 100MB/s in RAID0. But I've seen a 25MB/s SSD + a 25MB/s SSD = (are you ready for this?) 17MB/s. DV Nation is predicting they will have a RAID0 box out later this year that can outperform a single SSD. They couldn't get the ultra-fast IDE Samsungs to RAID up. I told them I wanted to do 2x SATA SSDs in RAID and they said their customers had not had success with that.
    I'm thinking newer models might work in the future.

    Also, don't get bent out of shape between SATA and IDE in SSDs. IDE is just as fast as, if not faster than, SATA. Even in the world of hard drives, IDE vs. SATA does not matter for speed. Drive makers CHOOSE to make their fastest consumer drives in SATA, but even a 10-year-old IDE interface is capable of 166MB/s, right? My 10,000 RPM SATA Raptor can only do 75MB/s, so IDE would be just as fast for it.

    Modern SSDs will outlast hard disks. Forget the write cycles; they are rated between 1,000,000 and 5,000,000 write cycles. The problem is, hard disks are not rated in write cycles. For an apples-to-apples comparison, you need to use MTBF (mean time between failures), and SSDs are rated much, MUCH higher in that regard. Look at the documentation on SanDisk's site, Samsung's, all the big manufacturers, and independent reviewers. I've seen math done that shows a life of up to 144 years!
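
    A minimal Python sketch of the bottleneck argument above, with illustrative numbers only (the per-drive controller cap and the RAID overhead factor are assumptions chosen to match the figures quoted, not measured values):

        # RAID0 scaling is gated by the slowest link: the NAND, the drive's own
        # controller, or the array's overhead.
        def raid0_throughput(n_drives, nand_rate, drive_controller_cap, raid_efficiency=1.0):
            per_drive = min(nand_rate, drive_controller_cap)  # the drive controller throttles fast NAND
            return n_drives * per_drive * raid_efficiency

        # Hard drives: 50 MB/s media behind a 150 MB/s controller, so striping scales well.
        print(raid0_throughput(2, nand_rate=50, drive_controller_cap=150))  # ~100 MB/s
        # Early SSDs: 60 MB/s NAND behind a 25 MB/s controller, plus heavy array overhead.
        print(raid0_throughput(2, nand_rate=60, drive_controller_cap=25, raid_efficiency=0.35))  # ~17 MB/s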
  • Bladen - Tuesday, May 8, 2007 - link

    I think many of us would be interested in seeing exactly what RAID 0 can do for these things. It would be good to compare 2x RAID 0 of this drive vs. 2x RAID 0 of the SanDisk and/or Samsung ones, and compare that to 2x RAID 0 Raptors too.

    Just be particularly flattering to SanDisk or Samsung to get another drive from them if you can.
  • abakshi - Tuesday, May 8, 2007 - link

    If I recall, the price point for the current (OEM) SanDisk 32GB SSD is $350 in volume. If those (which are shipping in laptops today) have much better performance than this, why would anyone use this in an industrial/medical/etc. application - pay $150 more for 1/2 the space and a slower drive? Am I missing something here?

    Also, any idea when the SanDisk/Samsung/etc. consumer SSDs are coming out?

  • PandaBear - Thursday, May 10, 2007 - link

    Yeah, longer life span if you do not read/write a lot. HDs wear out regardless of use, but flash usually doesn't. Also, industrial environments don't usually use a lot of storage but have a lot of packaging limitations (can't fit a large HD or don't have enough cooling) that rule out HDs.

    Check out Hitachi's Endurastar HDs; they are rated industrial grade but are more expensive and have smaller capacity. Now that is a better comparison.
  • MrGarrison - Wednesday, May 9, 2007 - link

    Samsung's SSDs are already out. Check Newegg. They are even available here in Sweden.

    I would buy two of their 16GB SSDs if only they had a SATA interface. Oh well, guess I'll have to wait a couple of months more.
  • Calin - Tuesday, May 8, 2007 - link

    Interesting review, but I have a small problem with it:
    Please compare the cost per gigabyte of the 2.5" SSD with the cost per gigabyte of other 2.5" mechanical hard drives.
    While totally correct, the $0.40/GB cost of current high-capacity 3.5" hard drives is much lower than the cost of 2.5" mechanical hard drives (somewhere around $1/GB, or slightly higher for low-capacity drives).
    The 16GB 2.5" SSD doesn't fit in the place of a Raptor, and a Raptor won't fit in the place of a 16GB SSD.

    Thanks
  • bob4432 - Monday, May 7, 2007 - link

    Are the power requirements for the Seagate 7200.2 correct - 0.87W / 2.42W?
  • MadBoris - Monday, May 7, 2007 - link

    I'm just very disappointed with the performance of these for consumer PC usage.
    I mean, this is solid state memory.
    Somebody is going to break this wide open with performance someday, because flash is just so damn slow it's painful to write this.

    Making a RAM drive today (using a portion of system RAM) on our PCs is thousands of times faster; its only drawback is volatility, so data doesn't persist.

    Just duct tape some RAM sticks together on a PCB, hook a Duracell to it, and we should be good. ;) Well, you get the idea... we need to leverage the performance of RAM today.

    Wake me up when this technology gets interesting.
  • Shadar - Monday, May 7, 2007 - link

    The article seems to imply that transfer rates are the problem with performance. In that case, putting 2 or 4 of these in RAID 0 would drastically increase performance. Four of these in RAID 0 should crush a standard hard drive, as the transfer rate would always be higher and it would have blazing access times.

    Though I must wonder why CF cards aren't RAIDed inside this drive. Why wouldn't the manufacturer use four 4GB cards in a RAID array to boost the speeds inside the box itself?
  • yyrkoon - Monday, May 7, 2007 - link

    quote:

    The article seems to imply that transfer rates are the problem with performance. In that case, putting 2 or 4 of these in RAID 0 would drastically increase performance. Four of these in RAID 0 should crush a standard hard drive, as the transfer rate would always be higher and it would have blazing access times.


    Yeah, sure, let's take something with an already severely limited lifespan and decrease that lifespan further by abusing it with RAID... Let's not forget that four of these drives would set you back over $2,000, which makes even less sense.

    I have done intensive testing of my own in this area, and to tell you the truth, *you* do not need that type of performance. *You*, of course, meaning you, me, or the next guy. Besides all this, if you really want to waste your money in the name of performance, why don't you get four or more servers capable of supporting 32GB of memory each, use iSCSI, export 31GB of RAM from each server, and RAID across those? If you're worried about redundancy, toss a couple of Raptors into the initiator and run RAID 0+1 or RAID 10...
  • Shadar - Monday, May 7, 2007 - link

    Your post reeks of arrogance, assuming that everyone uses a computer just as you do.


    For heavy gamers who also want to encode files, there is no perfect solution currently. If you put a four-disk SSD RAID array together, it would likely beat regular hard drives in transfer rate, and its seek times are faster too. Thus it's faster for games and faster for encoding files.

    Sure, it's 2,000 bucks today... but within six months I guarantee you will be able to get four SSDs for 1,000 or less. Maybe not 16GB each, but four 8GB disks is plenty.

    Plus, some people don't care about cost, they care about speed. If you care about cost, you aren't buying even one of these. These are meant for the power user... and a power user would RAID these things if it drastically increased performance. We don't know if it does, though, because there are no tests of it.
  • fc1204 - Monday, May 7, 2007 - link

    Um... there are RAID 0/1 SSD solutions out there. People who review these SSDs should open them up and check what's on the board.

    Really, you need to know what type of flash and which controller(s) are used in order to understand the drive. It could be using the MLC flash found in consumer USB pen drives or SD cards. It's cheaper than SLC, but carries a 5K or 10K write/erase cycle limit per block; SLC is good for up to 100K.

    Still, 100K * 16GB gets you about two years with this drive if you write at 25MB/s nonstop. Wearing out is not a problem that HDDs can avoid either; the mechanical parts of your HDD, especially the spindle, have a life span. You probably don't write at 25MB/s for 60*60*24*365 = 31,536,000 seconds a year, and if you did, I think your drive would probably not last as long as you think it would. People in the embedded systems market spend the money on flash SSDs because the data is safer than on HDDs: moving parts vs. no moving parts.

    There are also companies that make SD/CF RAID solutions. Let's not get upset because this is an embedded systems solution that is being shifted into the consumer market. We should try to really understand what is being done rather than shooting off speculation.
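
    As a rough sanity check on the two-year figure above, a minimal Python sketch of the worst-case math (assuming a 16GB drive, 100K erase cycles per block, ideal wear leveling, and nonstop 25MB/s writes; all figures are illustrative):

        # Worst-case endurance: continuous sequential writes with ideal wear leveling.
        CAPACITY_BYTES = 16 * 10**9      # 16 GB drive (assumed)
        CYCLES_PER_BLOCK = 100_000       # write/erase rating quoted for this drive
        WRITE_RATE = 25 * 10**6          # 25 MB/s sustained writes (assumed)

        total_writable_bytes = CAPACITY_BYTES * CYCLES_PER_BLOCK
        seconds = total_writable_bytes / WRITE_RATE
        print(f"{seconds / (60 * 60 * 24 * 365):.1f} years")  # ~2.0 years of nonstop writes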
  • PandaBear - Thursday, May 10, 2007 - link

    Totally agree. In some cases the environment cannot use a mechanical HD because of temperature, altitude, or high shock. There is no choice but to use flash.

    For consumers, the main advantages are power saving, heat, and noise. So there is no advantage for desktops yet, but for ultra-portable laptops it is good. If you want performance, you have to pay, and you probably won't be using a large one, because you will be optimizing your application (i.e. a database server with 8GB of mostly-read data that has to be fast) with lots of RAM and dedicated processors to begin with. It targets people who use laptops in remote locations where battery life and portability mean everything, but who don't waste their battery playing solitaire or MP3s: taking surveys with equipment, running mobile registration offices for emergency response, word processing on a 12-hour flight, military/police setting up checkpoints, etc. They would rather buy more expensive laptops than haul a diesel generator around.

    Just like porn, if you don't get it, it is not for you.
  • Traciatim - Monday, May 7, 2007 - link

    Why were there no Web server or Database benchmarks to show off where SSDs really shine?
  • dm0r - Monday, May 7, 2007 - link

    Obviously the first SSDs will be weaker in performance against traditional hard drives. SSDs will improve a lot because it's a very recent technology, but these drives are an excellent choice for laptops and UMPCs because of their low power consumption, low heat, and zero noise, thanks to literally abandoning mechanics.

    Good review Gary!
  • yyrkoon - Monday, May 7, 2007 - link

    Actually, these are not the first SSDs, and some of the first were actually much faster.

    SSDs have been around a lot longer than people think; these are just 'consumer grade,' in that they support consumer-grade interfaces. Besides all that, there are people such as myself who do not even consider NAND drives SSDs to begin with. In our world, an SSD uses static RAM, which is much faster, capable of handling much faster transfers, and does not suffer from this read/write cycle MTBF issue (per se). These types of SSDs, however, would not retain any data after the power is turned off and would require a battery (or some other source of electrical current) to do so. So in this one respect they are inferior, but superior in most other aspects.
  • Olaf van der Spek - Monday, May 7, 2007 - link

    quote:

    Super Talent has developed a set of proprietary wear leveling algorithms along with built in EDD/EDC functions to ensure excellent data integrity over the course of the drive's lifespan.

    What's EDD?
  • tirouspsss - Monday, May 7, 2007 - link

    The article doesn't surprise me in terms of performance. Dunno why, but for some reason I had this inclination that SSDs weren't going to be all that (at least for now)... and the 100K write/erase cycle limit has always bothered me - I just don't trust it.

    For JW:

    "Besides, with the rate of progress it's likely that in the future SSDs will get replaced every couple of years just like today's HDDs."

    What do you mean BESIDES??? This ISN'T a good thing. Weren't you saying the SSD is good for 10 years, etc.? So why should they get replaced so quickly then? Plus it's bad for the environment, is it not?
  • Chriz - Monday, May 7, 2007 - link

    I think Jarred meant that consumers using SSDs would still replace them every couple of years, just like HDDs, because newer ones would be larger and perform better.
  • JarredWalton - Monday, May 7, 2007 - link

    Yup. I worked at a large corporation where we had a million-dollar RAID setup for the main servers. Some huge box with 72 15K SCSI drives in it. After about four years, every old drive in there (which was running fine) was yanked out and replaced. Why? Because the new drives were faster, and even with RAID 5 + hot spare there was concern that multiple drive failures would result in a loss of data; for a location that generates something like several million dollars' worth of product movement every day, they couldn't risk any loss of data. So they upgraded all the old drives to new drives just to be safe, and the new drives were also a bit faster. For that type of market, the replacement costs of hardware are nothing compared to the potential for lost revenue.
  • Hulk - Monday, May 7, 2007 - link

    Since flash memory is so cheap, how come some manufacturer can't make a hard drive unit where you can plug in identical memory cards? You can get 4GB modules for less than $40 these days: 8 x $40 = $320 for 32GB. Using a RAID-type parallel access scheme, you should be able to get eight times the performance of one module. So if one module can write at 6MB/sec, then 6 x 8 = 48MB/sec.

    Plus if a module starts to fail you could replace it.

    These are just questions from someone who only has a basic understanding of this technology, of course. If it could work, I'm sure someone would be doing it. I'm curious as to the specifics of why this idea would not be feasible.
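
    A rough Python sketch of the arithmetic in the idea above (the module price and per-module write speed are the assumed figures from the post, and the scaling shown is the ideal best case):

        # Striping N identical flash modules: cost, capacity, and best-case aggregate write speed.
        modules = 8
        price_per_module = 40        # USD for a 4GB module (assumed)
        capacity_per_module = 4      # GB
        write_rate_per_module = 6    # MB/s (assumed)

        print(f"cost ${modules * price_per_module}, "
              f"{modules * capacity_per_module} GB, "
              f"ideal write {modules * write_rate_per_module} MB/s")
        # cost $320, 32 GB, ideal write 48 MB/s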
  • PandaBear - Thursday, May 10, 2007 - link

    Because the cheap NAND (MLC) doesn't last 100K cycles, and it is slow. Example:

    SanDisk CF costs around $10/GB and does around 10MB/s if trimmed for high performance (Ultra II), or 20MB/s if running parallel internally (Extreme III).

    The same CF capacity will cost 1.5x as much to make it 40MB/s in parallel, but that gives you very high reliability (250K to 1M write/erase cycles).


    So there you have it: for an HD you had better play it safe and use the expensive NAND, and it won't cost $10/GB.
  • miahallen - Wednesday, May 9, 2007 - link

    One of these:
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Four of these:
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    And, four of these:
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    $340 total for 32GB. In RAID 0, that would be a rated speed of 80MB/s read and 72MB/s write, and still great random access speeds.
  • Ajax9000 - Monday, May 7, 2007 - link

    There are CF2IDE and CF2SATA adapters (e.g. see this list: http://www.addonics.com/products/flash_memory_read... ).

    For about the same price as the SuperTalent 16GB SSD you could build a 32GB IDE SSD using two 16GB CF cards and a dual slot CF2IDE adapter.
    BTW, DansData looked at this sort of thing back in 2000 (http://www.dansdata.com/cfide.htm) and earlier this year (http://www.dansdata.com/flashswap.htm) but didn't go into performance details.

    I think it would be very interesting if AnandTech's upcoming review of consumer-oriented SSD products also looked at CF2IDE and CF2SATA adapters as an interim solution until "proper" SSDs get somewhat cheaper.

    Are there issues with this? Of course, but they may be reasonable tradeoffs.

    IDE vs SATA
    The Super Talent review notes that SSDs tend not to be interface-bound at the moment, so there may not be much difference between SATA and IDE for SSDs. Also, since CF uses an IDE interface (and I understand the CF2IDE adapters are little more than physical connectors), using a CF2IDE adapter shouldn't impact performance either... as long as the I/O controller in the CF card is good (and there are 133x and 150x CF cards at 12 and 16 gig).

    Wearout
    Reflex's comments are a fairly typical concern with respect to the use of flash memory in consumer PCs, and if there were no wear-levelling or ECC on consumer CF cards they simply couldn't be used for swap files, etc. BUT someone has commented on DailyTech that flash cards commonly have memory controllers which do wear levelling and/or ECC (http://www.dailytech.com/article.aspx?newsid=7135&... ). Even so, it would seem dangerous to have the OS and swap on the same card.

    The thing I like about the double CF2IDE adapter (and what I'd like to see someone such as AnandTech test out :-) is the possibility of having swap on a smaller/cheaper card (say 4GB?), so NAND wearout from the swap can be contained to a more affordably replaced item.

    In summary, compare the price and performance of a dual-CF2IDE adapter + 12/16GB CF (OS + apps) + 4/8GB CF (swap) against a 16/32GB SSD.

    Adrian
  • Reflex - Tuesday, May 8, 2007 - link

    Just to make something clear: wear leveling is not a magic panacea. The ratings they give already take wear leveling into account. It is not "100K writes + wear leveling to stretch it further"; it's "100K writes due to our wear leveling technique". Even without a swap file, a typical workstation would use that up fairly rapidly. I am going to go out on a limb, though, and guess that they probably have more like 250-500K writes but are only guaranteeing 100K to protect themselves. For the market these are designed for, 100K writes is more than the machines will likely use in their service lifetime. For a desktop PC, however, it would wear out very, very quickly, as I have stated above.
  • Ajax9000 - Tuesday, May 8, 2007 - link

    Thanks Reflex.

    I'm still curious about the performance comparison, as well as the TCO/longevity issue. :-)
  • yyrkoon - Monday, May 7, 2007 - link

    Just a guess, but I think it would be a nightmare designing a controller that could address multiple 'flash drives'. Let's take your typical SD card, for example: whatever it plugs into has one controller for the card, and if what you're suggesting were to happen, you would need multiple controllers, all talking to a main controller, which in turn would communicate with the actual HDD controller. This would be slow and problematic, especially when data spanned more than one memory device. I am not saying it could not be done, or even possibly done well, but there are other factors such as licensing fees, controller costs, etc.

    As an example, do you have any idea what it takes to get your hands on a legitimate copy of the SATA specification? Last I looked, it's ~$500 for the design specification 'book', and every device you make that uses the technology requires a licensing fee. In other words, it is not cheap. I would imagine the same applies to SD controllers (or whatever form of media said OEM would choose to support), and one normally goes into business to make money; this would likely eat deeply into the pockets of the shareholders.

    I can think of more reasons, and the ones given may not be entirely accurate, but this should give you some idea as to 'why'.
  • JoshuaBuss - Monday, May 7, 2007 - link

    I would love to know the exact same thing. You can buy 4GB SD cards for $40, 2GB for $20 if you shop around. Flash memory seems to be practically given away these days... it's so friggin' cheap.
  • Lonyo - Monday, May 7, 2007 - link

    I think they are doing it. IIRC, there was something posted on DailyTech about a card which used regular memory cards and hooked up to a SATA/PATA interface. I think, anyway; not 100% sure.
  • yacoub - Monday, May 7, 2007 - link

    Well, I guess they gotta start somewhere :D
  • Samus - Monday, May 7, 2007 - link

    Simply awesome, thanks for the review Gary. This is exciting technology for sure. It only took them 20 years to make it cost-effective and reasonably good storage.
  • redbone75 - Monday, May 7, 2007 - link

    I would say SSDs have a few more years to go before they become cost-effective - in the home consumer market, anyway. That market will be very small until the price per GB becomes more reasonable.
  • Lonyo - Monday, May 7, 2007 - link

    Is there any chance of a comparison of some 1.8" drives in the future?
    Since 1.8" mechanical drives are somewhat slower than 2.5" or 3.5" mechanical drives, and laptops with 1.8" drives are aiming for things like low power consumption, it would be nice to see, assuming you can get hold of some 1.8" drives of both types.
  • Reflex - Monday, May 7, 2007 - link

    These drives are great in an embedded or manufacturing environment. Typically they are not written to frequently, so you will never hit the write limitations. As a desktop PC drive, however, that write limitation could be hit very quickly - within a year, even. Furthermore, having worked with these drives extensively in embedded environments, I will point out that when the write limitation is hit, you can no longer read the device either. Since there is no real warning, you suddenly lose access to all data on that drive.

    Solid state storage is the future, but not in the form of today's flash. The write limitation is severe and very problematic. There are competing technologies that will hopefully show up sooner rather than later.
  • falc0ne - Monday, May 7, 2007 - link

    "The SSD16GB25/25M features a read seek time of less than 1ms, a maximum read/write speed of up to 28 MB/sec, a sustained transfer rate of 25 MB/sec, and an estimated write/erase cycle of approximately 100,000 cycles. This equates into a 1,000,000 hour MTBF rating and indicates a 10 year life expectancy based upon normal usage patterns. Super Talent has developed a set of proprietary wear leveling algorithms along with built in EDD/EDC functions to ensure excellent data integrity over the course of the drive's lifespan."
    This passage tells a completely different story...
  • mongo lloyd - Monday, May 7, 2007 - link

    Dan at Dansdata.com has said the exact same things as Reflex here for quite a while, and I tend to believe him more than SuperTalent's PR department.

    Also, as Reflex points out, NAND flash usually has way more than 100,000 write/erase cycles; 1 million cycles is not too uncommon.

    Regular CompactFlash memory (previously NOR flash, nowadays NAND flash) can take up to the same order of magnitude of write/erase cycles, and we all know memory cards for digital cameras have quite a finite life. And that's without putting a paging file on them.
  • PandaBear - Thursday, May 10, 2007 - link

    It depends on what kind of NAND. MLC can usually barely hit 100K for good ones (i.e. Toshiba and SanDisk), while it's 5K for bad ones (i.e. some batches from Samsung that got rejected and have to be dumped on the spot market).

    For a camera, you will wear out your camera's shutter before you can wear out the card, but for an HD, you had better have very good wear leveling and good NAND before even attempting it.
  • Gary Key - Monday, May 7, 2007 - link

    The manufacturers are taking a conservative path with the write/erase cycles per sector, and it has been difficult to nail them down on it. The latest information I have from SanDisk, as an example, is that the non-recoverable error rate is 1 error per 10^20 bits read on their current drives, but they have not committed to the active duty cycles or power-on hours used in arriving at that calculation. The majority of the SSD suppliers are focused on MTBF ratings at this time. We will have further details in our consumer article, as I expect Samsung to open up on the subject.
  • PandaBear - Thursday, May 10, 2007 - link

    NAND doesn't wear out by sitting around; it wears out permanently from erase/program cycles, or from read disturb (which is recoverable just by a rewrite). So MTBF is meaningless. You have to do a lot of continuous reading in order to wear it out by reads, and there are already algorithms that protect against such cases by refreshing the data, so no harm is done.

    It is the write that really kills the sector, and Samsung did not mention its erase/program rating for a reason: they failed their own spec so badly that many reputable clients rejected their parts (i.e. SanDisk rejected Samsung MLC, and Apple uses an aggressive recovery algorithm to tolerate it for audio playback; as for that cheap Taiwanese flash you get for free, with super slow performance or dying after two weeks - well, you know what you will find when you open up the case).

    For their SSDs, they may use SLC instead, for performance and reliability reasons. It costs 20% more on the spot market, but the manufacturing cost is much higher (almost 2x when you think about it), so it will cost more.

  • Reflex - Monday, May 7, 2007 - link

    First off, 100,000 is a VERY, VERY low write rating for flash; typical drives nowadays have 250K+ write cycles.

    Secondly, as pointed out by the article, the intended market is industrial and embedded, which, as I stated originally, is an environment where the drives are rarely written to. Typically you have a bootable image in those environments, and it is write-protected in some fashion or requires a very small number of writes.

    And finally, if you think 100K write cycles is a lot, watch the drive light on the front of your PC someday. Every flash is at minimum one read or write operation. Calculate how many times it flashes in ten minutes of 'typical' use, then extrapolate. You'll understand what I mean.
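
    A minimal Python sketch of that extrapolation (the flash count is purely an assumed observation, one flash per second on average, not a measurement):

        # Extrapolate the drive-light exercise: count flashes over ten minutes, then scale up.
        flashes_per_10_min = 600                     # assumed: roughly one flash per second
        ops_per_day = flashes_per_10_min * 6 * 24    # ten-minute windows per day
        ops_per_year = ops_per_day * 365
        print(ops_per_day, ops_per_year)             # 86,400 per day, ~31.5 million per year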
  • JarredWalton - Monday, May 7, 2007 - link

    The 100,000 writes is per sector (or whatever the flash block size is) of the drive, so even if you're generating thousands of writes per day, if those writes are all going to different blocks it becomes much less of an issue. That's what the "proprietary wear leveling algorithms along with built in EDD/EDC functions to ensure excellent data integrity over the course of the drive's lifespan" are supposed to address.

    Unless you are intentionally rewriting a single location repeatedly, I don't doubt that the drives can last 10 years. Considering I have had a lot of normal hard drives fail within five years, that's not too bad. Besides, with the rate of progress it's likely that in the future SSDs will get replaced every couple of years just like today's HDDs.
  • PandaBear - Thursday, May 10, 2007 - link

    With wear leveling it doesn't matter where you write; the write is internally mapped to a different physical location each time, so it is 100K writes per sector x number of sectors = the total number of writes you can get out of the entire drive.

    In this case, a bigger drive buys you more than just space: it buys you extra blocks/sectors that the controller can cycle through, reducing the wear on every single block.
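
    To put a number on that, a minimal Python sketch of the same arithmetic under a daily write budget instead of nonstop writes (the 10GB/day workload and the write-amplification factor are illustrative assumptions, and the result assumes ideal wear leveling with no hot spots):

        # Lifetime under a daily write budget, with a crude write-amplification factor.
        CAPACITY_BYTES = 16 * 10**9        # 16 GB drive (assumed)
        CYCLES_PER_BLOCK = 100_000         # per-block write/erase rating
        HOST_WRITES_PER_DAY = 10 * 10**9   # 10 GB/day (assumed workload)
        WRITE_AMPLIFICATION = 2.0          # placeholder; depends on controller and workload

        total_writable = CAPACITY_BYTES * CYCLES_PER_BLOCK
        days = total_writable / (HOST_WRITES_PER_DAY * WRITE_AMPLIFICATION)
        print(f"~{days / 365:.0f} years")  # ~219 years under these idealized assumptions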
  • Reflex - Monday, May 7, 2007 - link

    [quote]That's what the "proprietary wear leveling algorithms along with built in EDD/EDC functions to ensure excellent data integrity over the course of the drive's lifespan" are supposed to address.[/quote]
    Just to address this specifically: there is no such thing as a 'standard wear leveling algorithm'; every flash producer has their own method of wear leveling, so by default they are all proprietary. I am relatively certain that this company has not come up with something so revolutionary that it would essentially change the entire market, as you seem to be implying; if they had, I am pretty certain these flash chips would be the industry standard by now. Furthermore, were it any more advanced than the competition, it would not be advertised with a 100K write limitation when the industry standard is 250K writes.
  • Reflex - Monday, May 7, 2007 - link

    I am very aware of how it works. However, write operations can happen across several sectors. Once again, consider the market these are intended for. You will NOT get ten years out of one on a typical workstation; it simply will not happen. You will get at least a decade out of one as part of a cash register, assembly-line robot, or other industrial/embedded application, which is what their statement is all about.

    You are likely to get one to two years out of one of these, tops. Furthermore, when it fails it will be sudden, and you will not be able to recover your data through conventional means.

    I highly suggest you test this before you recommend these things to your readers as a main drive. I have tested it extensively myself as part of my job. My email is in my profile if you feel the urge to contact me about this.
