
TSMC begins to risk-produce 16FF+ chips for Nvidia, MediaTek, LG, Xilinx, others

Taiwan Semiconductor Manufacturing Co. on Wednesday said that it has begun risk production of chips using its 16nm FinFET+ (16FF+) manufacturing technology. A number of TSMC’s partners will use the new process technology to make leading-edge chips due next year. Among the first companies to adopt the improved 16nm FinFET process technology from TSMC will be Nvidia, MediaTek, LG Electronics and others.

With TSMC starting risk production of chips using the 16nm FinFET+ process technology now, commercial products made using the 16FF+ fabrication process should arrive in the late third quarter of 2015 at the earliest. TSMC officially anticipates that the 16FF+ volume ramp will begin around July 2015.

TSMC’s 16nm FinFET (CLN16FF) and 16nm FinFET+ (CLN16FF+) process technologies rely on the back-end-of-line (BEOL) interconnect flow of the company’s 20nm SOC (CLN20SOC) fabrication process, but use FinFET transistors instead of planar transistors. This hybrid approach provides additional performance and/or power savings, but does not allow die sizes to shrink significantly compared to chips made using the 20nm SOC technology. The proven BEOL interconnect flow makes it easier for TSMC to start mass production of chips using its 16FF and 16FF+ manufacturing technologies.

“Our successful ramp-up in 20SoC has blazed a trail for 16FF and 16FF+, allowing us to rapidly offer a highly competitive technology to achieve maximum value for customers’ products,” said Mark Liu, the president and co-CEO of TSMC. “We believe this new process can provide our customers the right balance between performance and cost so they can best meet their design requirements and time-to-market goals.”

According to TSMC, 16nm FinFET+ provides up to 15 per cent higher performance than 16nm FinFET at the same level of power consumption. At the same clock-rate, chips produced using 16nm FinFET+ are expected to consume 30 per cent less power than the same chips made using 16nm FinFET. Products manufactured using 16nm FinFET+ will offer up to 40 per cent higher speed than chips made using the 20nm technology, or will consume 50 per cent less power at the same clock-rate.
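As a rough sanity check, TSMC's relative figures can be chained together to compare all three processes against a common 20nm baseline. The snippet below is an illustrative sketch only; the normalization to 20nm and the multiplicative chaining of the quoted marketing percentages are our assumptions, not vendor-published math:

```python
# TSMC's quoted relative figures, normalized to a 20nm SOC baseline of 1.0.
speed_20nm = 1.0
power_20nm = 1.0

# 16FF+ vs 20nm: up to 40% faster at the same power,
# or 50% less power at the same clock.
speed_16ffplus = speed_20nm * 1.40
power_16ffplus_same_clock = power_20nm * 0.50

# 16FF+ vs 16FF: 15% faster at the same power, or 30% less power
# at the same clock, so 16FF sits between 20nm and 16FF+.
speed_16ff = speed_16ffplus / 1.15
power_16ff_same_clock = power_16ffplus_same_clock / 0.70

print(f"16FF+ speed vs 20nm: {speed_16ffplus:.2f}x")
print(f"16FF+ power vs 20nm (same clock): {power_16ffplus_same_clock:.2f}x")
print(f"16FF  speed vs 20nm: {speed_16ff:.2f}x")
print(f"16FF  power vs 20nm (same clock): {power_16ff_same_clock:.2f}x")
```

Chained this way, plain 16FF would land at roughly 22 per cent higher speed and about 29 per cent lower power than 20nm, which shows why 16FF+ is the more attractive target for performance-sensitive designs.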


The 16FF+ process is on track to pass full reliability qualification later in November, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015. Among the early adopters of TSMC’s 16nm FinFET+ fabrication process are Avago, Freescale, Nvidia, MediaTek, LG Electronics, Renesas and Xilinx.

“Nvidia and TSMC have collaborated for more than 15 years to deliver complex GPU architectures on state-of-the-art process nodes,” said Jeff Fisher, senior vice president of GeForce business unit at Nvidia. “Our partnership has delivered well over a billion GPUs that are deployed in everything from automobiles to supercomputers. Through working together on the next-generation 16nm FinFET process, we look forward to delivering industry-leading performance and power efficiency with future GPUs and SoCs.”


KitGuru Says: Keeping in mind that many 16FF+ adopters are unlikely to make chips using the 16FF process technology, it looks like a number of companies, including Nvidia, MediaTek and LG, are unlikely to offer brand-new chips made using a next-generation process technology before fall 2015. It is noteworthy that AMD is not among the early adopters of TSMC's 16FF+ process technology.



4 comments

  1. Kristijan Vragović

    It is true. AMD will not adopt 16nm FinFET until it’s mature, but when you consider a new chip with 4096 GCN cores on 28nm… Somehow I don’t think so… Simply because in that case AMD will also need to shave around 100 watts off the chip’s consumption. And even then the GPU will be more power hungry than a GTX 980.
    Maybe if Fiji XT or Bermuda can fit in a Hawaii XT TDP envelope, then I guess they will risk it. But because AMD, or better the former ATI engineers, are really competitive, and with all the speculation about 20nm, maybe there is something in that chip. I’m really interested in the R9 380x chips since I’m planning to upgrade next year. The market badly needs some “bunny from the hat” product from AMD…

  2. It’s not impossible. I think it’ll be 3840sp myself, but 4096 is certainly possible (and maybe even likely given it could get larger yields on 56-60 active CUs). My guess is 60CUs could probably be done within 23mm2 on 28nm, 64 probably slightly over that. You’ve gotta figure a larger chip is more efficient, plus the move to HBM could shave down around ~10-15w. We’re also probably talking very targeted operating voltage that probably won’t have much headroom. For instance, 7970 was 1.175v/925mhz. This may be closer to ~1.1v, if that.

    What you end up with would be a feat, but not pretty, especially if the goal is 1ghz using a similar architecture. It’s conceivably possible by throwing in every trick in the book to make it happen, but it would probably be damn close to 300w at stock and perhaps require some sort of crazy stock water cooling or something…

    It gets *slightly* less crazy if they move to 20nm. Conceivably the chip size could drop by almost half, clock speeds could go up 20% or so, and power could come down to slightly more manageable levels. This would be preferable not only for those reasons, but also because the design would then have the room for the flops to use the 512gbps available from 4 stacks of 128gbps HBM… it would be an efficient design in terms of AMD’s arch, though then the problem becomes that you’ve got a relatively small chip dissipating a disproportionate amount of heat… and probably needing a crazy stock water cooler or something.

    Either *seems* possible, but…

    When you figure big maxwell is 24 SMM at 1100 base and 1390 boost, and 24SMM should be fairly similar to 3840sp, that’s a problem. On 28nm, AMD’s would be SOL probably even overclocked compared to the base clock (if not very similar). On 20nm it’s a slightly more fair fight…maybe…depending on a lot of unknowns (how does 20nm/HBM [over]clock, especially per watt, what typical constant speed can big maxwell maintain, how much does each part cost relatively, etc).

    TLDR: You’re pretty much right with your last sentence. AMD could conceivably be competitive if they do something daring/unexpected (jump to 20nm or have some unknown arch improvement to match how well Maxwell is able to clock compared to past GPU archs from AMD/Nvidia), but based on the products they’ve put out the last couple years… betting on red is a gamble. The saving grace is that given Nvidia’s/AMD’s past pricing strategies, AMD may price themselves better (read: consumer friendly) relative to their performance deficit. If the R9 390x is $500-600 and ‘Titan II’ is disproportionately closer to $1000, many may not care about the relative performance difference, especially if they don’t need the 6/12GB Nvidia will provide (compared likely to AMD’s 4GB).

  3. Kristijan Vragović

    I agree with everything you said. You gave a really thorough explanation and I appreciate it. 🙂
    But I just don’t think that they will release a product with such a high TDP.
    Since the R9 300 series is GCN 1.2, and Tonga is exactly that, we could see around 40% better efficiency, looking at the R9 285 versus the 280 with the same core count. So, like you said, a bigger chip is more efficient, and maybe they can work some more miracles with it; along with HBM memory, that could give nice performance and consumption. But GCN cores are not known as efficient ones, and AMD needs to improve that a lot. They did a similar thing when they launched the 6000 series… One other thing that troubles AMD is performance at lower resolutions compared to the GTX 9×0 cards… At 4K AMD gives some competition, but at 1080p, and to be honest a lot of people have such monitors, the 970 is the clear winner… the 980 at 1440p.
    Factor in 40% better efficiency, maybe a 20nm process, and the same or slightly lower TDP, and we could have a new monster from AMD. And since AMD’s positioning is the 270x for 1080p, the 280x for 1440p, and the 290x for 4K, the same will happen this time with the 370x, 380x and 390x.
    Maybe consumption will not be as low as on Nvidia’s cards, but performance… I think they will compete with big Maxwell no problem…

  4. Kristijan Vragović

    The watercooling that’s rumored to go along with the R9 390x… Maybe it is a way to put a better cooler on their cards, preventing 95 degrees Celsius from happening again.