UNITED STATES PATENT AND TRADEMARK OFFICE

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 14/397,404          FILING DATE: 10/27/2014
FIRST NAMED INVENTOR: Wilhelmus Hendrikus Alfonsus Bruls
ATTORNEY DOCKET NO.: 2011P02143WOUS          CONFIRMATION NO.: 9007
EXAMINER: NOH, JAENAM          ART UNIT: 2481
NOTIFICATION DATE: 11/30/2018          DELIVERY MODE: ELECTRONIC

24737 7590 11/30/2018
PHILIPS INTELLECTUAL PROPERTY & STANDARDS
465 Columbus Avenue, Suite 340
Valhalla, NY 10595

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on above-indicated "Notification Date" to the following e-mail address(es):
patti.demichele@Philips.com
marianne.fox@philips.com
katelyn.mulroy@philips.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte WILHELMUS HENDRIKUS ALFONSUS BRULS and BARTOLOMEUS WILHELMUS DAMIANUS SONNEVELDT¹

Appeal 2018-001940
Application 14/397,404
Technology Center 2400

Before CARLA M. KRIVAK, HUNG H. BUI, and JON M. JURGOVAN, Administrative Patent Judges.

KRIVAK, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from the Examiner's Final Rejection of claims 1 and 3-21, which are all the claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

¹ Appellants identify the real party in interest as Koninklijke Philips N.V. (see App. Br. 1).

STATEMENT OF THE CASE

Appellants' invention is directed to a method and device for "generating and/or adapting views based on the 3D video signal for a respective 3D display" using "a parameter for targeting ... [the] 3D video signal to ... [the] respective 3D display based on a quality metric" and "optimizing the perceived 3D image quality of ... [the] respective 3D display" (Spec. 1:7-8, 27-29). Claims 1, 13, and 15 are independent. Independent claim 1, reproduced below, is exemplary of the subject matter on appeal.
1. A device for processing a three dimensional video signal, the three dimensional video signal comprising three dimensional image data to be displayed on a three dimensional display, wherein the three dimensional display requires multiple views for creating a three dimensional effect for a viewer, the device comprising:

a receiver for receiving the three dimensional video signal, and

a processor configured to:

determine at least a first processed view based on the three dimensional image data adapted by a parameter for targeting the multiple views to the three dimensional display,

calculate a quality metric indicative of perceived three dimensional image quality, wherein the quality metric is based on a combination of image values of the first processed view and a further view, wherein the further view is one of: (1) a second processed view based on the three dimensional image data adapted by the parameter, (2) a two dimensional view available in the three dimensional image data, and (3) a third processed view based on the three dimensional image data adapted by the parameter and the processed view, and

determine a preferred value for the parameter based on performing the determining and the calculating for multiple values of the parameter.

REJECTIONS and REFERENCES

The Examiner rejected claims 1, 3, 5, 9-16, 18, and 19 under 35 U.S.C. § 102(e) as anticipated by Izzat (US 2012/0249750 A1; published Oct. 4, 2012).

The Examiner rejected claims 4, 6-8, 17, 20, and 21 under 35 U.S.C. § 103(a) based upon the teachings of Izzat and Chen (US 2014/0044348 A1; published Feb. 13, 2014).

ANALYSIS

Rejection of claims 1, 3, 5, 9-16, 18, and 19 under § 102(e)

Claims 1, 3, 9-16, 18, and 19

Appellants contend the Examiner erred in finding Izzat discloses all the limitations of independent claim 1 (App. Br. 5-12). Specifically, Appellants contend Izzat does not disclose "a parameter," "a first processed view [determined] based on the three dimensional image data adapted by a parameter," calculating "a quality metric indicative of perceived three dimensional image quality" based on "a combination of image values of the first processed view and a further view," and "determining and calculating operations for multiple values of a parameter for targeting [] multiple views to a three dimensional display to determine a preferred value of the parameter" (App. Br. 6-8, 11; Reply Br. 2-6). We do not agree.

We agree with and adopt the Examiner's findings as our own (Final Act. 7-9; Ans. 17-24). Initially, we note that a cited reference need not recite the claim language ipsissimis verbis (see Kennametal, Inc. v. Ingersoll Cutting Tool Co., 780 F.3d 1376, 1381 (Fed. Cir. 2015) ("a reference can anticipate a claim even if it 'd[oes] not expressly spell out' all the limitations arranged or combined as in the claim, if a person of skill in the art, reading the reference, would 'at once envisage' the claimed arrangement or combination." quoting In re Petering, 301 F.2d 676, 681 (CCPA 1962)). The Examiner's findings persuade us that Izzat discloses the contested claim limitations. That is, the broad language of claim 1 allows a reading of Appellants' claim on Izzat.
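For readers less familiar with the underlying technology, the operation recited in claim 1, rendering a view with a candidate targeting parameter, scoring a combination of image values with a quality metric, and repeating for multiple parameter values to pick a preferred one, can be pictured with a minimal sketch. The code below is purely illustrative: the function names, the toy depth-based pixel shift, and the difference-based metric are hypothetical stand-ins and are not taken from Appellants' Specification, from Izzat, or from the record.

    # Illustrative sketch only; hypothetical names, not the claimed or prior-art method.
    import numpy as np

    def render_view(frame: np.ndarray, depth: np.ndarray, gain: float) -> np.ndarray:
        """Toy 'processed view': shift pixels horizontally in proportion to depth,
        scaled by the candidate targeting parameter (here, a disparity gain)."""
        h, w = depth.shape
        shift = np.rint(gain * depth).astype(int)
        cols = np.clip(np.arange(w)[None, :] + shift, 0, w - 1)
        return frame[np.arange(h)[:, None], cols]

    def quality_metric(view_a: np.ndarray, view_b: np.ndarray) -> float:
        """Toy metric over a combination of image values of the two views:
        smaller inter-view differences score higher (a stand-in for perceived quality)."""
        return -float(np.mean(np.abs(view_a.astype(float) - view_b.astype(float))))

    def preferred_parameter(frame, depth, candidates=(0.5, 1.0, 1.5, 2.0)):
        """Perform the determining and the calculating for multiple parameter
        values, and return the candidate whose metric is best."""
        scores = {g: quality_metric(render_view(frame, depth, +g),
                                    render_view(frame, depth, -g)) for g in candidates}
        return max(scores, key=scores.get)

In this sketch a caller supplies one grayscale frame and its depth map, and the returned candidate plays the role of the claimed "preferred value for the parameter"; nothing here is asserted about how Appellants or Izzat actually implement these steps.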
For example, Appellants' claimed "parameter" (that adapts 3D image data "for targeting the multiple views to the three dimensional display") is commensurate with Izzat's disparity adjustment, which modifies a disparity range (maximum/minimum disparity) in a stereo-image pair (Ans. 17, 21; see Izzat ¶¶ 21, 42, 64, 75-76; Spec. 2:22-23, 7:14-15, 9:6-8).² Izzat's disparity adjustment (parameter) targets multiple views (Izzat's stereo pair of images) to a three dimensional display, as claimed (Final Act. 8; Ans. 18; see Izzat ¶¶ 25, 40, 71, 104). Izzat also discloses a visual quality measure for a stereo-image pair, the measure assessing "[d]isparity range and disparity rate of change [that] can have strong impact on a viewer's visual fatigue in stereo-image playback" (see Izzat ¶¶ 22, 42; Ans. 17-18). Thus, Izzat's quality measure indicates a likelihood of visual discomfort in stereo-image playback, which is commensurate with the claimed "quality metric indicative of perceived three dimensional image quality" and with the broad description of "quality metric" in Appellants' Specification (Ans. 18 (citing Izzat ¶¶ 42, 59)).³

² Appellants' Specification describes "a parameter for targeting the multiple views to the 3D display[, t]he parameter may for example be an offset, and/or a gain, applied to the views for targeting the views to the 3D display" (Spec. 9:6-8 (emphasis added)). For example, a "parameter may be applied to the views to control the amount of disparity" for "mapp[ing] onto a disparity range of the target display device" (Spec. 2:22-23, 7:14-15).

³ Appellants' Specification provides "a quality metric is calculated indicative of perceived 3D image quality" of a "3D display [that] can be ... any stereoscopic display (STD)," the quality metric being "based on the combination of image values" (see Spec. 7:7-11, 9:11-12). The "quality metric may effectively be calculated in horizontal direction of the images" because "disparity differences always occur in horizontal direction corresponding to the orientation of the eyes of viewers" (see Spec. 14:28-31).

Appellants contend Izzat does not teach the claimed quality metric based on a combination of image values of a first processed view and a further view, because Izzat's "processing [of views in Figure 3] happens in the subsequent step 340 using the accessed (unprocessed) images and a subsequently determined quality metric (from step 330)" (App. Br. 8; Reply Br. 5-6). We are unpersuaded because the claim "term 'processed' ... is a broad term that is deemed to be met by ... [Izzat's] stereo pair of images," previously adjusted/processed for disparity in post-production (Ans. 19; see Izzat ¶ 39, Fig. 2). Thus, Izzat's stereo pair of images teaches the claimed "first processed view" and "further view [that] is ... a second processed view based on the three dimensional image data adapted by the parameter" (Ans. 19, 21). Additionally, Izzat's quality metric (quality measure) is based on a combination of image values of the first processed view and the further view (Izzat's stereo pair of images), as claimed (Ans. 20-21 (citing Izzat ¶¶ 42, 59)).

We are also not persuaded by Appellants' argument that Izzat does not teach determining a preferred parameter value by performing determining and calculating for multiple values of the parameter, as required by claim 1 (App. Br. 6, 11).
Appellants argue "the flowchart of FIG. 3 in Izzat ... is not iterative and does not indicate any feedback" (App. Br. 10-11). Izzat, however, expressly discloses "modify[ing] [in an image adjuster device] the images using one or more of the techniques described with respect to FIGS. 3-5, and provid[ing] the modified picture(s) .... to a display device" (Izzat ¶ 104 (emphasis added)). Izzat's disparity adjustment techniques described with respect to Figures 3-5 include "shifting in the x-direction, scaling+shifting, and/or interpolation" (see Izzat ¶¶ 64, 75-76). For example, "if we want to increase the parallax [enlarge disparity range] between the stereo-image pair by 1.5 times, then we scale the disparity map by 1.5 and use it for image warping" followed by "natural coordinate interpolation" (see Izzat ¶ 75).

Accordingly, we find that a person of ordinary skill in the art reading Izzat's disclosure would "at once envisage" that, if a disparity adjustment has not produced good results for a stereo-image pair (e.g., good visual quality without exposed occlusions, see Izzat ¶¶ 64, 70), then a second disparity adjustment would be performed for that stereo-image pair (see Blue Calypso, LLC v. Groupon, Inc., 815 F.3d 1331, 1344 (Fed. Cir. 2016) ("[A] reference may still anticipate if that reference teaches that the disclosed components or functionalities may be combined and one of skill in the art would be able to implement the combination." (citing Kennametal, 780 F.3d at 1383))). We, therefore, agree with the Examiner that Izzat discloses determining a preferred parameter value (a preferred disparity adjustment) by determining and calculating for multiple parameter values, as required by claim 1 (Ans. 22-23).

Thus, for the above reasons, we sustain the Examiner's anticipation rejection of independent claim 1, independent claims 13 and 15 argued for substantially the same reasons (App. Br. 14), and dependent claims 3, 9-12, 14, 16, 18, and 19, argued for their dependency on claims 1 and 13 (App. Br. 18).

Claim 5

Appellants contend Izzat does not disclose a parameter used for adapting three dimensional data to a three dimensional display by "at least one of: an offset; a gain; and a type of scaling" as recited in claim 5 (Reply Br. 7; App. Br. 13). We are not persuaded by Appellants' argument and note Izzat's disparity adjustment (parameter) may be "a type of scaling" as claimed (see Izzat ¶¶ 64 ("methods for adjusting disparity information .... include ... scaling+shifting"), 75 ("scale the disparity map by 1.5 and use it for image warping")). As Appellants' argument has not persuaded us of error, we sustain the Examiner's anticipation rejection of claim 5.

Rejection of claims 4, 6-8, 17, 20, and 21 under § 103(a)

Claim 4

Appellants contend the combination of Izzat and Chen does not teach the use of a sharpness calculation in a quality metric based on a combination of image values, as required by claim 4 (Reply Br. 8; App. Br. 15-16). Appellants also argue Chen "teaches away from fusing different aspects of an image's quality which are measured separately such as contrast, sharpness, resolution, geometry, pose, etc." (Reply Br. 8.) We are not persuaded by Appellants' arguments. As the Examiner explains, Chen discloses the benefit of using a sharpness metric for assessing image quality (Ans. 25; Final Act. 16; see Chen ¶¶ 5-6, 25).
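As a purely illustrative aside, a sharpness metric of the general kind discussed here can be as simple as the variance of a Laplacian response over the image values; the sketch below uses that common measure with hypothetical function names and is not drawn from Chen, Izzat, or the record.

    # Illustrative sketch only; a generic sharpness measure, not Chen's disclosed method.
    import numpy as np

    def sharpness(gray: np.ndarray) -> float:
        """Variance of a discrete Laplacian response; larger values indicate a sharper image."""
        g = gray.astype(float)
        lap = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0) +
               np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1) - 4.0 * g)
        return float(lap.var())

    def stereo_pair_sharpness(left: np.ndarray, right: np.ndarray) -> float:
        """One simple way to fold sharpness into a pair-level quality figure:
        report the weaker of the two views, since blur in either view is visible."""
        return min(sharpness(left), sharpness(right))

The pair-level combination shown is only one of many possible choices; the point is simply what a sharpness calculation over image values of a stereo pair can look like.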
We agree with the Examiner that a skilled artisan would have recognized the benefit of using the sharpness calculation taught by Chen on Izzat's stereo-image pair, to determine the likelihood of visual discomfort in stereo-image playback (Ans. 19, 25).

We also find Appellants' teaching away arguments are without merit (see Reply Br. 8-9). Chen's failure to disclose a metric based on a combination of views is not a criticism against the use of such metrics. Mere description of an implementation in the prior art that differs from Appellants' claimed invention, without more, does not show the prior art is "teaching away" from the claimed invention (see In re Fulton, 391 F.3d 1195, 1201 (Fed. Cir. 2004)).

Accordingly, we are not persuaded the Examiner erred in finding the prior art of record suggests a quality metric based on a sharpness calculation of a combination of image values as recited in claim 4. We, therefore, sustain the Examiner's obviousness rejection of claim 4.

Claims 6 and 21

Appellants contend the combination of Izzat and Chen does not teach a quality metric based on "a central area formed by ignoring border zones," as "[n]either Izzat nor Chen teaches or makes obvious that a device or method should ignore border zones" (Reply Br. 9; App. Br. 17). Appellants' argument is not persuasive because claims 6 and 21 do not specify or impose any bounds on the claimed "central area" and "border zones." Therefore, "any area of Izzat-Chen that does not include the 'border zone' ... [such as] any portion or outer edge of the screen, is considered to be formed or labeled as a 'central portion'" (Ans. 26). Additionally, Izzat's paragraph 60 discloses the quality metric may "focus[] on object-related rates of disparity change" and "maximum (and/or minimum) disparity ... associated with different objects," also suggesting a quality metric based on a central area (e.g., objects in the image's interior) formed by ignoring border zones (image margins away from the interior objects) (see Izzat ¶ 60).

Accordingly, we are not persuaded the Examiner erred in finding the prior art of record teaches or suggests the quality metric recited in claims 6 and 21. Therefore, we sustain the Examiner's obviousness rejection of dependent claims 6 and 21. Additionally, we sustain the Examiner's obviousness rejection of dependent claims 7, 8, and 20, argued for their dependency on independent claims 1 and 13 (App. Br. 18).

Claim 17

Claim 17 recites calculating "the quality metric by applying a weighting on the combination of image values in dependence on corresponding depth values." The Examiner finds the combination of Izzat and Chen teaches the limitations of claim 17 because "Chen discloses general use of weighting when assessing quality of images using sharpness and Izzat teaches assessment of quality of 3D images (inherently including depth values[)]" (Ans. 27; Final Act. 19 (citing Chen ¶ 83)). Appellants argue although Chen mentions a weighting, the weighting is not "a weighting on a combination of image values in dependence on corresponding depth values" as recited in claim 17 (App. Br. 18). We agree with Appellants.
Chen teaches a "quality score is based on weighted summation of probabilities, entropy or weighted summation of logarithm of probabilities," the "probabilities" reflecting similarity between a generic face model and segments of a face image, and the "quality score" detecting if the 9 Appeal2018-001940 Application 14/397 ,404 face image "is sufficient for face recognition" (see Chen ,r,r 76, 83-84). Thus, Chen's quality score does not include a quality metric that applies a weighting to a combination of image values in dependence on corresponding depth values, as claimed (Reply Br. 10). Izzat does not cure this deficiency (Reply Br. 10). The Examiner has not identified sufficient evidence in the combination of Chen and Izzat to support a rejection of obviousness. Therefore, we do not sustain the Examiner's rejection of dependent claim 17. DECISION The Examiner's decision rejecting claims 1, 3, 5, 9-16, 18, and 19 under 35 U.S.C. § 102(e) is affirmed. The Examiner's decision rejecting claims 4, 6-8, 20, and 21 under 35 U.S.C. § 103(a) is affirmed. The Examiner's decision rejecting claim 17 under 35 U.S.C. § 103(a) is reversed. No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(l )(iv). AFFIRMED-IN-PART 10 Copy with citationCopy as parenthetical citation