Application No.: 14/699,176
Filing Date: 04/29/2015
First Named Inventor: Bernhard GASSER
Attorney Docket No.: 080437.67712US
Confirmation No.: 2750
Examiner: ROSARIO, NELSON M
Art Unit: 2624
Notification Date: 06/21/2021 (delivered electronically to Crowell & Moring LLP)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte BERNHARD GASSER
____________

Appeal 2020-001404
Application 14/699,176
Technology Center 2600
____________

Before KARL D. EASTHOM, SCOTT B. HOWARD, and STEVEN M. AMUNDSON, Administrative Patent Judges.

EASTHOM, Administrative Patent Judge.

DECISION ON APPEAL

I. STATEMENT OF THE CASE

Appellant¹ appeals under 35 U.S.C. § 134(a) from the Examiner’s final rejection of claims 1–12, which constitute all of the claims pending in this application. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

¹ We use the word “Appellant” to refer to “applicant” as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Bayerische Motoren Werke Aktiengesellschaft. Appeal Br. 1.

II. DISCLOSED AND CLAIMED SUBJECT MATTER

The Specification discloses a method “for determining an input position on a touch-sensitive display by a user” by detecting the “position of the eyes relative to at least a portion of the touch-sensitive display . . . and the [] position of the contact.” Spec. Abstract. Figures 3A and 3B follow:

[Figures 3A and 3B]

Figures 3A and 3B above show a “camera 7” that “records at least the eyes of the user.” Spec. ¶ 31. From the recording, “the processing unit 6 determines the position of the eyes 5 of the user” and “also uses the known position of the camera 7 for this purpose.” Id. The “electronic processing unit 6” also “receives information about the position of the contact and/or information about the touch area from the touch-sensitive display 1.” Id. ¶ 32.

Then, from the “received data, the electronic processing unit 6 determines the angles W1 and W2, based on which the touch position is then corrected so as to determine the input position.” Spec. ¶ 33. After determining the angles W1 and W2, the “processing unit 6 thereafter determines the input position proceeding from the touch position and the angles W1 and W2.” Id. ¶ 36. Then, in an example, “the touch position is shifted along the projection 8 by a magnitude determined from the angle W1.” Id.
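For orientation only, the following minimal Python sketch illustrates the kind of viewing-angle correction the quoted paragraphs describe: a detected touch position is shifted along the projection of the line of sight by a magnitude that grows with the viewing angle. The coordinate conventions, the single combined angle, and the layer_gap_mm parameter are illustrative assumptions for this sketch, not the claimed method or the Specification's actual algorithm.

```python
import math


def corrected_input_position(eye_pos, touch_pos, layer_gap_mm=1.0):
    """Shift a detected touch position along the projected line of sight.

    eye_pos:      (x, y, z) of the user's eyes in display coordinates (mm),
                  with the touch surface at z = 0 and z increasing toward the user.
    touch_pos:    (x, y) of the detected contact on the touch surface (mm).
    layer_gap_mm: assumed gap between the touch surface and the display layer
                  beneath it (an illustrative stand-in, not a value from the
                  Specification).

    Returns an estimated (x, y) input position: the point on the display layer
    that lies on the line running from the eyes through the contact point.
    """
    ex, ey, ez = eye_pos
    tx, ty = touch_pos

    # In-plane offset of the contact point from the point directly below the
    # eyes; the correction is applied along this projected direction.
    dx, dy = tx - ex, ty - ey
    in_plane = math.hypot(dx, dy)
    if in_plane == 0.0 or ez <= 0.0:
        return tx, ty  # user looks straight down (or implausible data): no shift

    # Viewing angle measured from the display normal; a more oblique view
    # produces a larger correction (loosely analogous to the angles W1 and W2
    # discussed above).
    view_angle = math.atan2(in_plane, ez)

    # Continue the eyes-to-contact ray down to the display layer and keep its x, y.
    shift = layer_gap_mm * math.tan(view_angle)
    ux, uy = dx / in_plane, dy / in_plane  # unit vector pointing away from the eyes
    return tx + shift * ux, ty + shift * uy


# Example: eyes roughly 350 mm above the panel and offset from the contact point.
print(corrected_input_position(eye_pos=(0.0, 200.0, 350.0), touch_pos=(120.0, 180.0)))
```

A fuller implementation would also account for the known camera position and the two separate angles W1 and W2 that the quoted paragraphs describe.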
Independent claim 1 follows:

1. A method for user-interaction with a touch-sensitive display, the method comprising the acts of:
detecting a position of eyes of the user;
detecting a touch area that is where the user is touching the touch-sensitive display, the touch area being offset from an input position along the touch-sensitive display, which input position is where the user intends to touch the touch-sensitive display;
determining the input position based on the detected position of the eyes relative to at least a portion of the touch-sensitive display and the detected touch area; and
providing user-interaction with content displayed at the determined input position as if the user is touching the determined input position rather than the touch area.

III. REFERENCES

The prior art relied upon by the Examiner as evidence in rejecting the claims on appeal is:

Name         Reference            Date
Yee          US 2011/0254865 A1   Oct. 20, 2011
Shimotani    US 2011/0285657 A1   Nov. 24, 2011
Hamalainen   US 2014/0085202 A1   Mar. 27, 2014

IV. REJECTION

Claims 1–12 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Yee, Shimotani, and Hamalainen. Final Act. 4.

V. OPINION

The Examiner rejects claims 1–12 as obvious over the combined teachings as set forth above. See Final Act. 4–10. Appellant treats the claims as a group and relies on arguments it presents for claim 1. See Appeal Br. 4–6; Reply Br. 1–3. Therefore, claim 1 represents the claims on appeal.

Claim 1 recites (emphasis added) “detecting a touch area that is where the user is touching the touch-sensitive display, the touch area being offset from an input position along the touch-sensitive display, which input position is where the user intends to touch the touch-sensitive display” and “providing user-interaction with content displayed at the determined input position as if the user is touching the determined input position rather than the touch area.”

The Examiner relies on Yee to disclose many of the limitations of claim 1 and contends that it would have been obvious to modify Yee’s system using Shimotani’s system in order to improve the operation of Yee’s display. See Final Act. 4–7. The Examiner also relies on Hamalainen. See id. at 7–8.

Appellant directs its arguments to alleged deficiencies in the individual teachings of the prior art, including Shimotani, with respect to the “providing . . . rather than” limitation. Appeal Br. 5. Because we determine that Shimotani teaches the sole disputed limitation at issue here as the Examiner found, we do not address Appellant’s other arguments alleging that Hamalainen and Yee also fail to teach the disputed limitation. See id. at 4–6.

The Examiner relies on Shimotani to teach the sole disputed claim 1 limitation at issue here: “providing user-interaction . . . at the determined input position as if the user is touching the determined input position rather than the touch area.” Final Act. 5–6 (citing Shimotani Figs. 2, 9). The Examiner finds that Shimotani teaches “input touch area Point A” and “determined input position of point B.” Ans. 5 (citing Shimotani ¶ 53, Figs. 5, 9) (emphasis omitted).

Shimotani Figure 5 follows:

[Shimotani Figure 5]

Figure 5 above shows the “principle behind a coordinate position correction made by the display input device.” Shimotani ¶ 20.
Shimotani Figure 9 follows:

[Shimotani Figure 9]

Figure 9 above shows “an example of a screen configuration displayed by a conventional display input device.” Shimotani ¶ 24.

Figures 5 and 9 operate under similar principles, where Figure 5 represents Shimotani’s improvement over the prior art represented by Figure 9 in situations involving different mounting angles for the disclosed displays. See Shimotani ¶¶ 8–13. In both cases, as explained further below, Shimotani’s detection system enlarges the user’s intended icon (as Figure 9 shows) as the user’s finger approaches the keyboard. See id. ¶¶ 8–9, 64. No dispute exists over how Shimotani’s system operates. See, e.g., Appeal Br. 5 (explaining that Shimotani’s invention as described with respect to Figures 5 and 6 improves the prior art represented by Figure 9 by providing “a more accurate enlargement”); Ans. 5–6 (showing enlarged icon “se” expanded as the user’s intended icon based on user’s line of sight from point C).²

² The word “se” refers to an icon for a Japanese-language character shown in Shimotani’s Figure 9. Shimotani ¶¶ 8, 65 (listing phonetic words for Japanese-language characters in Figure 9), Fig. 9.

With respect to Figure 5, Shimotani states that “point A shows the finger position which is calculated and outputted by the approaching coordinate position calculating unit 301 when the user brings his or her finger close to the touch panel 1,” and “point B shows the display position of the [intended] icon on the surface of the LCD panel 10 of the touch panel 1.” Shimotani ¶ 55. Shimotani discloses “[w]hen software key icons shown in FIG. 9 are cited as an example, the point B shows the display position of ‘. . . (se).’” Id.

Shimotani explains how “the coordinate position correcting unit 303 outputs the coordinates (x2, y2) which are corrected . . . to the main control unit 300 and the image information creating unit 304.” Id. ¶ 64. The image information creating unit then “carries out an enlargement process of enlarging an image in a display area having a fixed range . . . for displaying the image in the vicinity of the finger position.” Id.

Appellant argues that Shimotani does not teach the claimed user interaction at “the input position as if it had been touched by the user instead of the touch position,” where the “input position is where the user intends to touch the touch-screen” and the touch area is “where the user actually touches the touch screen.” Appeal Br. 3–4.³ According to Appellant, the claimed phrasing “as if . . . rather than” requires “that the user is touching the touch area and not the input position.” Id. at 5; see also Reply Br. 1–2. Appellant argues that “Shimotani resizes graphical elements of a touch display based on where the user is looking,” but “once the user touches the enlarged keys of the touch-screen,” the input is registered “as where the user is actually touching the enlarged keys.” Appeal Br. 5; see also Reply Br. 1–2.

³ Claim 1 defines the “touch area” and “determined input position” in earlier steps, as follows: “detecting a touch area that is where the user is touching the touch-sensitive display” and an “input position is where the user intends to touch the touch-sensitive display.”

These arguments do not undermine the Examiner’s showing. Appellant fails to explain clearly why claim 1 avoids Shimotani’s system that includes enlarging an intended icon, as the Examiner determined.
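For orientation only, the following minimal Python sketch captures the enlargement-and-dispatch behavior described above (Shimotani ¶¶ 55, 64) and mapped onto claim 1 in the discussion that follows. The data structures, function names, square icon geometry, and scale factor are illustrative assumptions for this sketch, not Shimotani's disclosed implementation.

```python
from dataclasses import dataclass


@dataclass
class Icon:
    label: str        # e.g. "se"
    x: float          # unenlarged display position (point B), display units
    y: float
    half_size: float  # half the unenlarged icon width/height


def intended_icon(icons, gaze_corrected_xy):
    """Pick the icon nearest the gaze-corrected approach coordinates (x2, y2)."""
    gx, gy = gaze_corrected_xy
    return min(icons, key=lambda i: (i.x - gx) ** 2 + (i.y - gy) ** 2)


def enlarged_bounds(icon, scale=2.0):
    """Enlarged display area drawn in the vicinity of the approaching finger."""
    half = icon.half_size * scale
    return (icon.x - half, icon.y - half, icon.x + half, icon.y + half)


def dispatch_touch(icons, gaze_corrected_xy, touch_xy):
    """Return the label whose content is triggered by the touch.

    If the contact lands anywhere inside the enlarged area of the intended
    icon, that icon's content is triggered, even though the actual contact
    (the "touch area") may sit next to, not on, the unenlarged icon position.
    """
    target = intended_icon(icons, gaze_corrected_xy)
    x0, y0, x1, y1 = enlarged_bounds(target)
    tx, ty = touch_xy
    if x0 <= tx <= x1 and y0 <= ty <= y1:
        return target.label  # acts as if the unenlarged position were touched
    return None              # touch fell outside the corrected/enlarged area


keys = [Icon("se", 40.0, 20.0, 5.0), Icon("so", 52.0, 20.0, 5.0)]
# Gaze-corrected approach point near "se"; the actual contact lands to the right,
# adjacent to the unenlarged "se" position but inside its enlarged area.
print(dispatch_touch(keys, gaze_corrected_xy=(41.0, 21.0), touch_xy=(47.0, 22.0)))  # -> "se"
```

In this sketch the contact lies beside the unenlarged "se" position but inside its enlarged bounds, so the "se" content is returned.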
Appellant defines the “as if . . . rather than” language as “the requirement that the user is touching the touch area and not the input position.” Appeal Br. 5. Appellant contends that “[b]ecause, in Shimotani, the input position is adjusted to become the touch area, Shimotani does not meet the claim language.” Id.

Contrary to this argument, Shimotani’s system does not adjust the original “input position,” which, according to claim 1, is “where the user intends to touch the touch-sensitive display.” See supra note 3. Rather, as explained below, Shimotani’s user intends originally to touch a single unenlarged icon. See, e.g., Shimotani, Fig. 9.

That is, as indicated above, Shimotani discusses a display position of an icon, such as “(se),” and then describes enlarging the “(se)” icon to be in the vicinity of the finger by using eye position data. See Shimotani ¶¶ 53–55, 64. In other words, Shimotani’s unenlarged icon represents “the determined input position” as the icon a user intends to touch. See Shimotani, Fig. 9; supra note 3. Then, based on user-interaction and eye position, Shimotani’s system enlarges the icon such that touching the “touch area” (altered icon, or enlarged “(se)”) triggers “user-interaction with content displayed at the determined input position” (i.e., triggers the “se” function/content) “as if the user is touching the determined input position” (i.e., the position under the unaltered “se” icon) “rather than the touch area” (a position adjacent the unaltered “se” icon within the enlarged icon).⁴ See Final Act. 5–6 (citing Shimotani, Fig. 9); Ans. 5 (citing Shimotani, Figs. 5, 9).

⁴ In Shimotani, absent the corrective icon enlargement, the touching of the “touch area” otherwise would have invoked no function or another functional icon adjacent the “se” icon. See, e.g., Shimotani, Fig. 9 (showing enlarged icons slightly overlapping the positions of adjacent (unenlarged) icons). However, Appellant does not argue that the “rather than” language in claim 1 requires otherwise invoking other content pertaining (for example) to an icon adjacent the “se” icon in the absence of a correction process. See infra note 5 (defining what “rather than” requires).

Similar to the language of claim 1, the Specification describes the “input position” as where “the user would like to touch on the surface of the touch screen.” Spec. ¶ 3. The Specification also describes shifting the “touch position.” See Spec. ¶ 36 (“the touch position is shifted along the projection 8 by a magnitude determined from the angle W1”), ¶ 15 (“It is possible to correct the offset in the position of the contact . . . by detecting and considering the position of the eyes of the user and the known position of . . . the touch position, or of a graphical element, such as a button.”), ¶ 18 (“for example by shifting the touch position or area”). The Specification also describes “not shift[ing]” the touch area or shifting it “only comparatively little” in some situations. See id. ¶ 37, Fig. 3B. In other words, the Specification contemplates overlapping the intended/determined “input position” with the actual “touch position.” See id.
As described above, similar to the Specification, Shimotani describes shifting the area a user can touch in order to trigger content otherwise only triggered from the intended unenlarged icon.⁵ Nothing in claim 1 precludes using an enlarged icon so that the user is able to see the enlarged “touch area” of claim 1 during Shimotani’s corrective process.

⁵ The Specification does not explicitly employ the “rather than the touch area” language of claim 1. As noted above, Appellant defines the clause in the Appeal Brief. See Appeal Br. 5 (“Within the phraseology of ‘as if . . . rather than []’ is the requirement that the user is touching the touch area and not the input position.”).

In summary, Appellant does not persuasively explain how the Examiner erred in finding that Shimotani teaches and suggests “providing user-interaction with content displayed at the determined input position as if the user is touching the determined input position rather than the touch area.”

Based on the foregoing discussion, Appellant does not show error in the Examiner’s findings and determination of the obviousness of claim 1. As noted above, Appellant does not challenge claims 2–12 independently from claim 1. Accordingly, claims 2–12 stand with claim 1.

VI. CONCLUSION

We affirm the Examiner’s Final Action and sustain the § 103(a) rejection of claims 1–12.

In summary:

Claim(s) Rejected   35 U.S.C. §   Reference(s)/Basis           Affirmed   Reversed
1–12                103(a)        Yee, Shimotani, Hamalainen   1–12

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED